1

The effects of operating conditions on the hydrodynamic lubricant film thickness at the piston-ring/cylinder liner interface of a firing diesel engine

Sochting, Sven January 2009 (has links)
Conventional investigations into the performance of piston-rings in internal combustion engines are performed at relatively low speeds and consider only steady-state operating conditions. Loss of power in internal combustion (IC) engines is becoming an increasing issue when they are operated at high engine speeds. This project is directed at developing technology to establish whether this phenomenon is influenced by a lubricant-related effect. In normal service, automotive engines typically operate under transient conditions. These rapid changes in operating conditions may influence the thickness of the hydrodynamic film which lubricates the interface between the piston-rings and liner. During this project two capacitance methods were employed in a fired compression ignition engine: an amplitude modulated (AM) system originally developed by Grice, and a new "high speed" capacitance technique based on a frequency modulated principle. The first part of this thesis is concerned with the development and implementation of a new apparatus suitable for measuring the thickness and extent of the hydrodynamic oil film which lubricates the piston-rings and liner. The working principle of the high speed capacitance measurement system required the design, manufacture, assembly and commissioning of a novel dynamic calibration apparatus. The new system can also be used for static calibration (AM system) of capacitance-based distance measuring systems. It uses a manufacturer-calibrated, closed-loop-controlled piezo-actuator to present a target relative to the sensor face. Some previous investigations concluded that the oil film thickness (OFT) was stable. However, this work shows that there are cyclic variations of the OFT on a stroke-to-stroke and cycle-to-cycle basis. A series of measurements was conducted under various fixed speed/load points.
The effects of using lubricants of different viscosity on the minimum OFT between liner and piston-ring have been little studied, and this work shows that it was possible to distinguish between measurements made with different lubricants. This thesis also describes measurement of the oil film thickness during abrupt changes in engine operating conditions.
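Capacitance sensors of this kind infer film thickness from the measured capacitance. A minimal sketch of the conversion, assuming a simple parallel-plate approximation; the sensor area and oil permittivity below are illustrative assumptions, not values from the thesis:

```python
# Film thickness from measured capacitance via the parallel-plate relation
# C = eps0 * eps_r * A / d, rearranged to d = eps0 * eps_r * A / C.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def film_thickness(capacitance_f, sensor_area_m2, eps_r_oil):
    """Return the oil film thickness in metres for a parallel-plate sensor."""
    return EPS0 * eps_r_oil * sensor_area_m2 / capacitance_f

# Illustrative values only: a 1 mm^2 sensor face and eps_r ~ 2.2 for mineral oil.
d = film_thickness(capacitance_f=9.74e-12, sensor_area_m2=1e-6, eps_r_oil=2.2)
print(f"film thickness = {d * 1e6:.2f} um")
```

Because capacitance scales inversely with gap, thinner films give larger capacitances, which is what makes the technique sensitive in the micrometre range.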
2

Effervescent proliposomes for aerosol delivery to paranasal sinuses

Korale, Aluthweediya K. O. D. January 2016 (has links)
This study aims to design and develop effervescent proliposomes that could disintegrate in water and liberate liposomes, and to investigate the potential suitability of liposomes generated for aerosolization to target paranasal sinuses. Novel effervescent proliposomes prepared with Soya phosphatidylcholine (SPC) and Dipalmitoylphosphatidylcholine (DPPC) successfully generated stable liposomes with an improved disintegration time of less than 5 min. Differences in lipid composition were found to influence liposome size and drug entrapment of the hydrophobic drug Beclometasone dipropionate (BDP). Mannitol-based formulations developed with DPPC:Chol (1:1) produced liposomes of 7.54±0.15 µm with a drug entrapment efficiency of 82.15±8.29%. Addition of the mucoadhesives alginic acid or chitosan to effervescent proliposomes made with SPC was found to hamper BDP entrapment in liposomes. Effervescent proliposomes produced SPC:Chol liposomes that also proved beneficial for entrapment of the hydrophilic drug Xylometazoline hydrochloride (XH). The Pari Sinus (pulsating aerosol technology) and Pari Sprint (non-pulsating technology) nebulizers were used for liposome delivery to a nasal cast. Choice of carrier did not affect the liposome’s ability to withstand shearing. A novel system of a Sar-Gel® (water indicating paste) coated clear nasal cast fixed to a two-stage impinger system was set up to analyze drug deposition within the nasal cast cavity. Sinus drug deposition with effervescent mannitol, DPPC:Chol formulation was observed to be highest at 48.45±2.75 cm2 with pulsation compared to deposition of 35.52±11.11 cm2 without pulsation. Drug distribution studies indicated that the Pari Sinus deposited 10.47±2.9% drug, while the Pari Sprint deposited only 4.6±1.4%. The degree of drug loss was higher with conventional liposomes in the Pari Sinus nebulizer, indicating that the degree of bilayers disruption depended on formulation.
3

Performance of reduced-scale vortex amplifiers used to control glovebox dust

Zhang, Guobin January 2005 (has links)
Ventilation systems for a nuclear plant must have very high reliability and effectiveness. In this application, fluidic devices have advantages which electro-mechanical and pneumatic devices lack. Fluidic devices will not easily wear out, they have a relatively fast response, and in some cases they may be cheaper than an equivalent conventional device. Most importantly, they have fewer moving parts (usually none) and so are inherently reliable, so long as the fluidic design is effective. Vortex amplifiers (VXAs) are therefore ideal for active ventilation systems where access for maintenance is problematic. From 1995 to 2000, space limitations at Sellafield drove the desire to minimise both VXA size and glovebox size. Recently completed plant expansions use a smaller version of the VXA, produced by geometrically scaling the existing standard model, called the mini-VXA. Subsequent performance of the mini-VXA has been disappointing, with high oxygen levels noted in the inerted gloveboxes; this required an expensive increase in the inert gas supply rate of the gloveboxes to mitigate the fire risk. Experiments using a mini-VXA and a typical glovebox confirmed the high O2 levels. The O2 distribution in the glovebox indicates that oxygen is entering the glovebox via the VXA supply ports, against the general direction of flow. The ultimate source of this back-leakage is the control port (which is open to atmosphere), and smoke visualisation studies on the mock VXA indicate a mechanism: separated flow patterns caused by excessive control port momentum. A temporary solution using an orifice plate and spacing chamber has been shown to reduce the essential nitrogen supply to one quarter of that required without the modification. Addition of the orifice plates enables further reduction in nitrogen use, and the smallest orifice tested performs best, with no discernible cost in pressure drop and therefore fan power. The author also found the following points.
The ratio of control port area to supply port area is a critical parameter affecting mixing of the two airstreams, yet the exit port area is unimportant. The ratio of supply port area to exit port area has no influence on the discharge coefficient (at least within the scope of the current work). It is also identified that the ratio of chamber height to exit port radius does not affect the discharge coefficient or the two angle parameters. Doubling chamber height, supply port area and control port area at the same time has a slight effect on the discharge coefficient (attributed partly to a viscous effect), but no effect on the two angle parameters. The chamber height has little effect on Reynolds number. If the supply port area is not too small relative to the exit port, the supply port area will not significantly affect Reynolds number. The use of the discharge coefficient and the two angle parameters to characterise VXA performance breaks with the traditional form of dimensionless characteristics used for the purpose. Testing these alternative characteristics has enabled the momentum (which dominates control of VXA performance) to be more explicitly expressed in updated design rules.
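The discharge coefficient used above is the ratio of actual to ideal (loss-free) flow through a port. A sketch of how such a coefficient might be computed from rig measurements; the flow rate, port area, pressure drop and air density below are illustrative assumptions, not data from the thesis:

```python
import math

def discharge_coefficient(q_m3s, area_m2, dp_pa, rho_kgm3=1.2):
    """Cd = actual volumetric flow / ideal flow, where the ideal
    (loss-free) flow through the port is A * sqrt(2 * dp / rho)."""
    q_ideal = area_m2 * math.sqrt(2.0 * dp_pa / rho_kgm3)
    return q_m3s / q_ideal

# Illustrative test-rig numbers: 8 L/s through a 4 cm^2 port at 500 Pa.
cd = discharge_coefficient(q_m3s=0.008, area_m2=4e-4, dp_pa=500.0)
print(f"Cd = {cd:.3f}")
```

A Cd below 1 quantifies how far the real port falls short of the loss-free ideal, which is why it serves as a compact dimensionless performance characteristic.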
4

A problem solving strategy based on knowledge-based systems

Gillies, Alan Cameron January 1992 (has links)
The historical development of knowledge-based systems (KBS) from artificial intelligence (AI) has led to a number of characteristics which isolate knowledge-based systems from the rest of software development. In particular, it has led to the growth of 'stand-alone' systems. This thesis argues that this has restricted the use of KBS to a narrow range of problems, and has reduced the effectiveness of the consequent solutions. By considering first a specific problem in some depth, the thesis seeks to develop an alternative approach, where KBS is considered as simply another software technology to be used within an integrated solution. The problem considered is the automatic analysis of photoelastic fringe patterns, and KBS methods are employed alongside conventional image processing techniques to produce an integrated solution. The conventional algorithmic solution is first constructed and evaluated. This solution, having proved partially successful, is then enhanced by the use of KBS techniques to provide a full solution. From this specific example, a framework for integration is derived. This framework is tested in an unrelated application to consider whether the approach adopted has more general utility than one specific class of problem. This problem was the provision of decision support for business planning based upon market research. The resulting strategy and design are described, together with details of how the system was implemented under the supervision of the author. The thesis concludes with an evaluation of the work and its contribution to knowledge in the twin areas of the specific solutions and the underlying methods.
5

Lossless image compression for aerospace non-destructive testing applications

Lin, Xin-Yu January 2004 (has links)
This thesis studies areas of image compression and relevant image processing techniques with application to Non-destructive Testing (NDT) images of aircraft components. The research project includes investigation of current data compression techniques and the design of efficient compression methods for NDT images. A literature review was conducted initially to investigate the fundamental principles of data compression and existing methods of lossless and lossy image compression. Such investigation provides not only the theoretical background, but also the comparative benchmarks for the research project. Chapter 2 provides general knowledge of image compression. The basic predictive coding strategy is introduced at the beginning of chapter 3. Fundamental theories of the Integer Wavelet Transform (IWT) can be found in chapter 4. The research project proposes three innovative methods for lossless compression of NDT images: the region-based method, which employs region-oriented adaptation; the texture-based method, which employs a mixed model for the prediction of image regions with strong texture patterns; and a hybrid method, which utilises advantages from both predictive coding and IWT coding. The main philosophy of lossless image compression is to de-correlate the original image data as much as possible, by mapping from spatial domain to spatial domain in the predictive coding strategy, or from spatial domain to transform domain in the IWT coding strategy. The proposed region-based method aims to achieve the best mapping by adapting the de-correlation to the statistical properties of decomposed regions using the component's CAD model. With the aid of component CAD models to divide the NDT images of aircraft components into different regions based on the material structures, the design of the predictors and the choice of the IWT are optimised according to the specific image features contained in each region having the same material structure.
The texture-based method achieves the best de-correlation by using a mixed data model in the region possessing strong texture patterns. A hybrid scheme for lossless compression of the NDT images of aircraft components is presented. The method combines the predictive coding and the IWT. After region-based predictive coding, the IWT is applied to the error images produced for each decomposed region to achieve further image de-correlation by preserving the information contained in the error images with fewer transform coefficients. The main advantages of using the IWT are its multi-resolution nature and lossless property with integer grey level values in images mapped to integer wavelet coefficients. The proposed methods are shown to offer a significantly higher compression ratio than other compression methods. The high compression efficiency is seen to be achieved by not only a combination of the predictive coding and the IWT, but also optimisation in the design of the predictor and the choice of the transform according to the specific image features contained in each region having similar material structures.
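The predictive-coding stage de-correlates pixel data by subtracting a prediction from each sample, leaving small residuals that an entropy coder can store compactly. A toy sketch using a simple previous-pixel predictor (the thesis's predictors are region-optimised; this one is purely illustrative):

```python
import numpy as np

def predict_residuals(row):
    """Previous-pixel predictor: residual[i] = row[i] - row[i-1] (row[0] kept as-is).
    A good predictor concentrates residuals near zero, which entropy coders exploit."""
    row = np.asarray(row, dtype=np.int64)
    res = np.empty_like(row)
    res[0] = row[0]
    res[1:] = row[1:] - row[:-1]
    return res

def reconstruct(res):
    """Lossless inverse: a cumulative sum recovers the original row exactly."""
    return np.cumsum(res)

row = [100, 101, 103, 103, 102, 104]
res = predict_residuals(row)
print(res.tolist())                # [100, 1, 2, 0, -1, 2] -- small, near-zero residuals
print(reconstruct(res).tolist())   # identical to the input row
```

The exact round trip is what makes the scheme lossless; the concentration of residuals near zero is what makes it compressible.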
6

Development and analysis of hybrid adaptive neuro-fuzzy inference systems for the recognition of weak signals preceding earthquakes

Konstantaras, Anthony J. January 2004 (has links)
Prior to an earthquake, there is energy storage in the seismogenic area, the release of which results in a number of micro-cracks, which in effect produce a weak electric signal. Initially, there is a rapid rise in the number of propagating cracks, which creates a transient electric field. The whole process lasts on the order of several tens of minutes, and the resulting electric signal is considered an electric earthquake precursor (EEP). Electric earthquake precursor recognition is mainly hindered by the very nature of the signal itself. The signal, according to the theory of propagating cracks, is usually a very weak electric potential anomaly appearing in the Earth's electric field prior to an earthquake, often unobservable within the far stronger, noise-embedded electric background. Furthermore, EEP signals vary in duration and size, making reliable recognition even more difficult. The work described in this thesis incorporates neuro-fuzzy technology for the reliable recognition of EEP signals within the electric field. Neuro-fuzzy networks are neural networks with intrinsic fuzzy logic abilities, i.e. the weights of the neurons in the network define the premise and consequent parameters of a fuzzy inference system. In particular, the adaptive neuro-fuzzy inference system (ANFIS) is used, which has been shown to be effective as a universal approximator that can match any input/output data set, provided the system is adequately trained. An average model for EEP signals has been identified, based on a time function describing the evolution of the number of propagating cracks. Pattern recognition is performed by the neural network to identify the average EEP model from within the electric field. The fuzzy nature of the neuro-fuzzy model, though, enables the network to classify as EEPs signals that are not exactly the same but do approximate the average EEP model.
On the other hand, signals that look like EEPs but do not sufficiently approximate the average model are suppressed, preventing false classification. The effectiveness of the proposed network is demonstrated using electrotelluric data recorded in NW Greece in 1995. Following training, testing with unseen data verifies the reliable performance of the model.
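The fuzzy acceptance of near-matches described above can be sketched in a much simplified form: here a plain correlation score stands in for the trained ANFIS, and a Gaussian membership function stands in for its fuzzy classification. The template shape and tolerance are assumptions for illustration, not the thesis's EEP model:

```python
import numpy as np

def similarity(signal, template):
    """Normalised zero-lag cross-correlation: 1.0 for a perfect match."""
    s = (signal - signal.mean()) / (signal.std() + 1e-12)
    t = (template - template.mean()) / (template.std() + 1e-12)
    return float(np.dot(s, t) / len(s))

def eep_membership(signal, template, tol=0.15):
    """Gaussian membership: candidates close to the average model score near 1;
    look-alikes that deviate too far are suppressed towards 0."""
    return float(np.exp(-(((1.0 - similarity(signal, template)) / tol) ** 2)))

t = np.linspace(0.0, 1.0, 200)
template = t * np.exp(-5.0 * t)                  # stand-in for the average EEP shape
candidate = template + 0.002 * np.sin(20.0 * t)  # approximate, but not identical
print(f"membership = {eep_membership(candidate, template):.3f}")
```

The graded membership, rather than a hard threshold, is what lets approximate matches through while suppressing signals that merely resemble the model.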
7

Automatic image alignment for clinical evaluation of patient setup errors in radiotherapy

Su, QingLang January 2004 (has links)
In radiotherapy, the treatment is typically pursued by irradiating the patient with high energy x-ray beams conformed to the shape of the tumour from multiple directions. Rather than administering the total dose in one session, the dose is often delivered in twenty to thirty sessions. For each session several settings must be reproduced precisely (treatment setup). These settings include machine setup, such as energy, direction, size and shape of the radiation beams as well as patient setup, such as position and orientation of the patient relative to the beams. An inaccurate setup may result in not only recurrence of the tumour but also medical complications. The aim of the project is to develop a novel image processing system to enable fast and accurate evaluation of patient setup errors in radiotherapy by automatic detection and alignment of anatomical features in images acquired during treatment simulation and treatment delivery. By combining various image processing and mathematical techniques, the thesis presents the successful development of an effective approach which includes detection and separation of collimation features for establishment of image correspondence, region based image alignment based on local mutual information, and application of the least-squares method for exhaustive validation to reject outliers and for estimation of global optimum alignment. A complete software tool was developed and clinical validation was performed using both phantom and real radiotherapy images. For the former, the alignment accuracy is shown to be within 0.06 cm for translation and 1.14 degrees for rotation. More significantly, the translation is within the ±0.1 cm machine setup tolerance and the setup rotation can vary between ±1 degree. For the latter, the alignment was consistently found to be similar or better than those based on manual methods. 
Therefore, a good basis is formed for consistent, fast and reliable evaluation of patient setup errors in radiotherapy.
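Mutual information, the similarity measure behind the region-based alignment, can be estimated from a joint intensity histogram. A minimal sketch, not the thesis's implementation; the bin count and test images are arbitrary:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in nats) of two equal-sized greyscale images,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
shifted = np.roll(img, 3, axis=1)        # a misaligned copy
print(mutual_information(img, img))      # aligned: high MI
print(mutual_information(img, shifted))  # misaligned: much lower MI
```

An alignment search maximises this score over candidate translations and rotations, since intensities become statistically dependent only when the images line up.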
8

Advancing acoustography by multidimensional signal processing techniques

Bach, Michael January 2006 (has links)
The research presented in this thesis is an investigation of multidimensional signal processing for acoustography. Acoustography is a novel non-destructive screening technique similar to x-ray inspection; however, instead of using hazardous ionising radiation, it is based on sound, and the inspection data are intensity images of the interior of the components under inspection. Multidimensional signal processing refers to the processing of these inspection data by signal and image processing techniques. The acoustographic imaging system is characterised with a focus on signal and image processing; this characterisation investigates various degradations and image-influencing properties. Signal and image processing techniques are then formally defined in the context of processing acoustographic data, and the applicability of denoising and segmentation techniques is demonstrated. Filter primitives are shown to be able to remove certain noise features. Particular focus rests on denoising and segmentation techniques based on a physical analogy with diffusion, in which the diffusivity is locally controlled by data gradient measures. With certain parameter settings, the diffusion technique performs denoising of the inspection data without manipulating true image features; the same algorithm with different parameter settings is employed to perform segmentation, separating fault features from background. Based on the response characteristics of the acoustographic system, a data fusion algorithm has been developed to merge multiple observations into one datum, thereby increasing the dynamic range. The two-stage algorithm consists of an iterative curve fitting algorithm followed by a reverse calculation using the curve parameters to yield a single observation. The algorithm has been further improved towards robustness to noise, and the fusion of denoised data is demonstrated.
As a direct result of the work presented, future work is suggested to improve inferring from observation data to the state of the component under investigation. Further work is suggested to improve the understanding of the imaging system and inverse methods are proposed which take into account various particularities of the acoustographic system.
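Diffusion with gradient-controlled diffusivity can be sketched in one dimension with the classic Perona-Malik scheme; the thesis works on 2-D acoustographic images, and the parameters and test signal below are illustrative assumptions:

```python
import numpy as np

def perona_malik_1d(signal, iters=50, dt=0.2, k=0.5):
    """Edge-preserving smoothing: the diffusivity g = 1/(1 + (grad/k)^2) is small
    where gradients are large, so true edges survive while noise diffuses away."""
    u = np.asarray(signal, dtype=float).copy()
    for _ in range(iters):
        grad = np.diff(u)                       # forward differences
        g = 1.0 / (1.0 + (grad / k) ** 2)       # gradient-controlled diffusivity
        flux = g * grad
        u[1:-1] += dt * (flux[1:] - flux[:-1])  # discrete divergence of the flux
    return u

rng = np.random.default_rng(1)
step = np.concatenate([np.zeros(50), np.ones(50)])  # a sharp edge, e.g. a fault boundary
noisy = step + 0.05 * rng.standard_normal(100)
denoised = perona_malik_1d(noisy)
print(f"noise std before: {np.std(noisy[5:45]):.4f}, after: {np.std(denoised[5:45]):.4f}")
```

Raising or lowering the contrast parameter `k` shifts the balance between smoothing and edge preservation, which mirrors how one parameter setting denoises while another segments.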
9

South Asian females and technology education : a study of engagement and disengagement in Britain

Mirza, Mehreen Naz January 2002 (has links)
My thesis is concerned with the engagement and disengagement of South Asian girls and women with technology education in Britain. The research arose out of the need to establish whether South Asian girls and women had been included in, and benefited from, the attempts to encourage more girls and women into the fields of science, engineering and technology. Existing theoretical, especially feminist, frameworks for understanding the experiences of girls and women in science, engineering and technology were largely silent about the experiences of minority ethnic girls and women, especially those of South Asian heritage; their experiences and perspectives were subsumed under an assumed generic female experience, which I have termed the 'universal woman' syndrome. Similarly, existing theoretical discourses for understanding the specific experiences of South Asian girls and women in education and the labour market were too broad in focus and unable to offer any commentary about their position in relation to specific subjects and/or occupations. My thesis is intended to make a contribution towards assessing whether the initiatives to promote girls and women into technology are of relevance and applicability to South Asian girls and women. I adopted an 'anti-oppressive' epistemological and methodological framework within which to locate the research process, from initial conceptualisation to final data analysis. In particular I focused on anti-racist, feminist, and Black feminist epistemology and methodology. I utilised both quantitative and qualitative methods, within a reflexive framework for gathering and analysing data, in order to respond better to changing research circumstances. My thesis is intended to make a contribution to the wider understanding of epistemological and methodological research issues, especially in terms of the applicability of anti-racist, feminist and Black feminist standpoint epistemology.
It is intended to contribute especially to our knowledge about ethical concerns which researchers need to be cognisant of from the outset of their research project. Data were gathered and analysed by me using a grounded theory approach, which resulted in my use of a theoretical model proposed by Anthias and Yuval-Davis (1992). This theory is intended to examine the connections between gender and ethnicity in the process of nation-building, but I felt that it could also be used to explain the ways in which gender and ethnicity acted upon the South Asian girls and women in their choice of subject of study and subsequent jobs/occupations. The data analysis revealed that many of the initiatives to encourage girls and women into fields in which they were under-represented had had very little, if any, impact upon the subject and occupational choices of the South Asian girls and women in this study, as those initiatives had focused on addressing primarily, if not exclusively, gender issues, whereas the lives and decision-making processes of the South Asian girls and women were informed by a particularly ethnicised-gendered experience. Consequently the thesis moves beyond focusing exclusively on the ways in which South Asian girls and women make choices about technology education and occupations, to a concern with how they make choices about education and work in general, through negotiating with various discourses around questions of gender, ethnicity/race, class and religion.
10

Economic analysis and environmental impact of energy usage in microbusinesses in UK and Kurdistan, Iraq

Azabany, Azad January 2014 (has links)
Over-reliance on fossil fuels, a rising global population, industrialisation, and demands for a higher standard of living and for transportation have caused alarming damage to the environment. If the current trend continues, catastrophic damage to the earth and its environment may not be reversible. There is an urgent need to reduce the use of fossil fuels and substitute them with renewable energy sources such as wind, tidal and hydroelectric power. Solar energy seems the most promising due to its environmentally friendly nature, portability and reliability. This source was examined in terms of microbusinesses, i.e. SMEs including a hairdressing salon, an education centre, a fried chicken outlet and a printing shop. Small businesses account for a large proportion of the economy, and the analysis developed could be applied to any small business to show its contribution to the carbon footprint and how this could be reduced using solar energy. The proportions of their current electricity usage that could be substituted with solar cells were calculated; combined, these have a significant impact. These businesses were considered for the UK and for Kurdistan, Iraq, with the former being more amenable to solar energy implementation. Analysis of the four SMEs showed that the most energy-intensive business was the fried chicken takeaway, which used a large amount of electricity, and the least energy-intensive was the education centre. For the education centre in the UK, 57% of the electricity usage could be replaced by solar energy, compared to Kurdistan, where it generated a surplus of energy that could be fed into the national grid. The gents' grooming hairdressing and blue apple businesses gave intermediate figures. Parallel conclusions were drawn regarding CO2 emissions released into the atmosphere, with the education centre being the most environmentally friendly and the fried chicken outlet the least. In addition, data for a larger public space, an international airport, were analysed and the value of solar replacement demonstrated.
The methodology and data analysis approach used may be implemented for other business units and larger public spaces such as hospitals, shopping complexes and football stadiums.
