
A measurement based study of the acoustics of pipe systems with flow

English, Emmet J. January 2010 (has links)
The focus of this thesis is the measurement of specific aeroacoustic properties in ducts at frequencies below the cut-on frequency of the first higher-order mode. A body of measurement results is presented which highlights the effect of flow on some of the aeroacoustic characteristics of ducts and describes the aeroacoustic sources of an in-duct orifice and a simple expansion chamber. The results have been compared with published theory where appropriate. Important developments from measurements of the acoustic characteristics of a simple duct with flow include a new experimental method to determine the viscothermal attenuation coefficient. In addition, pressure reflection coefficient measurements of an unflanged duct with flow, for two different edge conditions, are used in conjunction with a numerical model developed by Gabard [1] to determine the extent of vorticity shed from the duct termination. A novel method is presented for the measurement of aeroacoustic source strengths in ducts with flow. The source is defined in terms of acoustic power and is determined by measuring the acoustic power flux both upstream and downstream of the source region in a duct. The method adopts a plane-wave approximation and was assessed experimentally by creating a source in a duct at a number of known frequencies and modifying its magnitude by a known amount. The source measurement technique is applied to an in-duct orifice, and the results are used to determine the spectral characteristics and velocity dependence of the source. They indicate that the duct-to-orifice area ratio has an important effect on both. New measurements of the aeroacoustic source strength of a simple flow-excited expansion chamber are presented. The results indicate that lock-on flow tones occur when hydrodynamic modes which form in the chamber match the tailpipe resonant frequencies. The results are compared with predictions of a model based on describing function theory.
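The source-power measurement described above can be illustrated numerically. A minimal sketch, assuming the standard convected plane-wave expression for acoustic power flux in a duct with uniform mean flow and hypothetical decomposed wave amplitudes (not the thesis's measured data):

```python
def acoustic_power(p_plus, p_minus, area, rho, c, mach):
    """Net downstream acoustic power flux for plane waves in a duct
    with uniform mean flow (standard convected-wave expression)."""
    z = rho * c
    w_plus = area * abs(p_plus) ** 2 * (1 + mach) ** 2 / (2 * z)
    w_minus = area * abs(p_minus) ** 2 * (1 - mach) ** 2 / (2 * z)
    return w_plus - w_minus

def source_power(up, down, area, rho, c, mach):
    """Source power = net flux just downstream of the source region
    minus net flux just upstream (energy conservation, no dissipation).
    'up' and 'down' are (p_plus, p_minus) amplitude pairs."""
    w_up = acoustic_power(*up, area, rho, c, mach)
    w_down = acoustic_power(*down, area, rho, c, mach)
    return w_down - w_up

# Identical wave fields on both sides imply zero source power.
quiet = source_power((1.0, 0.5), (1.0, 0.5), 0.01, 1.2, 343.0, 0.1)
```

With identical decompositions upstream and downstream the method correctly reports zero source power, while a stronger downstream field yields a positive value.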

Study of surface discharge behaviour at the oil-pressboard interface

Zainuddin, H. January 2013 (has links)
This thesis is concerned with surface discharge behaviour at the oil-pressboard interface. For large transformers this is classified as a serious failure mode because it can lead to catastrophic failure under normal AC voltage operating conditions. To improve understanding of this failure mode, a surface discharge experiment at the oil-pressboard interface has been conducted for different moisture levels in the pressboard by applying AC voltage stress over long periods. The processes involved in surface discharge at the oil-pressboard interface, up to the appearance of the first full discharge, have been identified and correlated with the measured data. The results show that the moisture level within the pressboard plays an important role in the partial discharge (PD) activity of certain processes. The decreasing trend in the PD data during surface discharges cannot be treated as a sign of improving health; rather, it is a key indicator of white-mark propagation toward the earth point. The characteristics of full discharge events have been analysed to develop knowledge for condition monitoring of surface discharge at the oil-pressboard interface. Full discharges are corona-like events whose random occurrence is dominated by the charge accumulated on the pressboard surface along the white marks, rather than by the polarity of the applied AC voltage. A 2-D axisymmetric surface discharge model has also been developed using COMSOL Multiphysics, a finite element analysis (FEA) software package. The model treats the pressboard region near the interface (a transition region) as porous, and the bulk of the pressboard as a perfect insulator. The model is built from continuity equations coupled with Poisson's equation to study the problem in terms of charge transport mechanisms and electric field distributions. The thermal conduction equation is included to study the thermal effects of surface discharge activity at the oil-pressboard interface. The behaviour of surface discharge is studied by validating the simulated surface discharge current pulse against the measured current. The simulation results show that a field-dependent molecular ionisation mechanism plays an important role in streamer propagation during the rising front of the current pulse, whilst during the decaying tail the contribution of an electron attachment process is dominant. The modelling results suggest that degradation marks (white and black marks) are due to high-energy partial discharge events sustained over long periods, which lead to thermal degradation at the oil-pressboard interface.
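A charge transport model of the kind described above, with continuity equations coupled to Poisson's equation, typically takes the following generic drift-diffusion form (a sketch of the standard formulation, not the thesis's exact equation set or coefficients):

```latex
% Continuity for each carrier species i (density n_i, charge q_i,
% mobility \mu_i; S_i collects ionisation, attachment and
% recombination source terms):
\frac{\partial n_i}{\partial t}
  + \nabla \cdot \left( n_i \mu_i \mathbf{E} \right) = S_i

% Poisson's equation linking the net space charge to the potential V:
\nabla \cdot \left( \varepsilon \nabla V \right) = -\sum_i q_i n_i,
\qquad \mathbf{E} = -\nabla V
```

The field-dependent molecular ionisation and electron attachment processes mentioned in the abstract would enter through the source terms $S_i$.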

Heteropolyacids and non-carbon electrode materials for fuel cell and battery applications

Kourasi, Maria January 2015 (has links)
Heteropolyacids (HPAs) are a group of chemicals that have shown promising results as catalysts over recent decades. Since HPAs have displayed encouraging performance as electrocatalysts in acidic environments, this project tests their redox activity in acid and alkaline aqueous electrolytes and their electrocatalytic performance as additives on a bifunctional gas diffusion electrode in alkaline aqueous electrolyte. Electrochemical characterisation of two different HPAs, phosphomolybdic acid (PMA) and phosphotungstic acid (PWA), dissolved in acidic and alkaline environments showed that both heteropolyacids demonstrate redox activity but also suffer from poor stability. A series of gas diffusion electrodes was manufactured with PMA and PWA incorporated in their catalyst layer. The electrode support was carbon Toray paper, and each heteropolyacid was mixed with Ni to create the catalyst layer of the electrode. Electrochemical characterisation of these electrodes in alkaline electrolyte showed that the addition of HPAs enhances the activity of the nickel towards the oxygen evolution reaction (OER) and the oxygen reduction reaction (ORR). During constant-current measurements on the manufactured gas diffusion electrodes it was noticed that the electrodes fail after a period of time, which could be attributed to corrosion of the carbon support. In order to find alternative, non-carbon materials for the electrode support, electrochemical characterisation was performed on Magneli phase bulk materials, Magneli spray-coated electrodes and PVD-coated electrodes. The results from this investigation showed that Magneli phase materials can support electron transfer reactions, but their electronic conductivity is rather low and needs to be enhanced. Additionally, it was shown that the Magneli coating protects the substrate over the potential region where the OER and ORR take place. Hence, Magneli materials could be used as a support for the bifunctional HPA gas diffusion electrodes.

Defect and fault tolerance techniques for nano-electronics

Melouki, Aissa January 2011 (has links)
Nanotechnology-based devices are believed to be a possible future alternative to CMOS-based devices. It is predicted that the high integration density offered by emerging nanotechnologies will be accompanied by high manufacturing defect rates and high operation-time fault rates. This thesis is concerned with developing defect and fault tolerance techniques to address the low manufacturing yield due to permanent defects and the reduced computational reliability due to transient faults projected in nanoscale devices and nanometre CMOS circuits. The described research makes four key contributions. The first contribution is a novel defect tolerance technique to improve the manufacturing yield of nanometre CMOS logic circuits. The technique is based on replacing each transistor by an N²-transistor structure (N ≥ 2) that guarantees tolerance of any (N−1) defects. The targeted defects include stuck-open, stuck-short and bridging defects. Extensive simulation results using ISCAS benchmark circuits show that the proposed technique achieves higher manufacturing yield than recently proposed techniques, at a reduced area overhead. The second contribution is two new repair techniques, named Tagged Replacement and Modified Tagged Replacement, to improve the manufacturing yield of nanoscale crossbars implementing logic circuits as look-up tables (LUTs). The techniques are based on highly efficient repair algorithms that improve yield by increasing the resolution of repair. Simulation results show that the proposed techniques provide higher levels of defect tolerance and have lower redundancy requirements than recently reported techniques. Another popular crossbar-based circuit implementation is the nanoscale programmable logic array (PLA). The third contribution is a probabilistic defect tolerance design flow that improves the manufacturing yield of nanoscale PLAs and significantly reduces post-fabrication test and diagnosis time. This is achieved by limiting defect diagnosis to the nanowire level rather than the crosspoint level, as in previously proposed graph-based techniques. The final contribution involves improving both the manufacturing yield and the computational reliability of nanoscale crossbars implementing logic circuits as LUTs. This is achieved by combining Hamming and Bose-Chaudhuri-Hocquenghem (BCH) codes, either together or with N-Modular Redundancy and Bad Line Exclusion techniques. Simulation results show a significant improvement in fault tolerance by the proposed techniques (targeting fault rates up to 20%) compared to previously reported single coding schemes.
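For N = 2, a transistor-replacement structure of the kind described is the classic "quadded" arrangement: two parallel branches, each of two series transistors, which behaves as an ideal switch under any single stuck-open or stuck-short defect. A small exhaustive check of that property, as an illustrative switch-level model rather than the thesis's simulator:

```python
from itertools import product

def conducts(gate_on, defects):
    """2x2 series-parallel transistor network: branch A = (t0, t1) in
    series, branch B = (t2, t3) in series, branches in parallel.
    A transistor conducts if its gate is on, is forced on by a
    stuck-short defect, or forced off by a stuck-open defect."""
    def t(i):
        d = defects.get(i)
        if d == "open":
            return False
        if d == "short":
            return True
        return gate_on
    return (t(0) and t(1)) or (t(2) and t(3))

# Every single stuck-open or stuck-short leaves ideal switch behaviour:
# an open kills one branch but the other still conducts, and a short
# is blocked by its series partner when the gate is off.
for i, kind in product(range(4), ["open", "short"]):
    for gate in (True, False):
        assert conducts(gate, {i: kind}) == gate
```

The same enumeration fails for a bare single transistor, which is exactly the yield argument for the replacement structure.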

Numerical simulations of a transverse sonic jet in a laminar hypersonic flow

Dixon, Favian C. January 2003 (has links)
No description available.

An investigation of boundary-driven streaming in acoustofluidic systems for particle and cell manipulation

Lei, Junjun January 2015 (has links)
No description available.

Micro-impedance cytometry

Bernabini, Catia January 2010 (has links)
Electrical impedance spectroscopy is a non-invasive and label-free technique that allows rapid counting and characterisation of particles in suspension based on their response to applied AC potentials. In recent years, lab-on-a-chip technologies have been developed to enable single-cell impedance detection, and a wide range of impedance-based microfluidic devices have been reported. Despite the number of contributions and the achievements of this field, micro-impedance cytometry still suffers from a lack of sensitivity and specificity compared to traditional flow cytometry, which limits the potential commercialisation of microfluidic impedance devices. While impedance measurements of beads and cells are well established, discrimination between particles that are very similar in size, or detection of small particles (around 1 μm in diameter) such as bacteria, still represents a difficult task. A number of issues limit the sensitivity and specificity of these microfluidic systems. Primarily, the sensitivity is governed by the dimensions of the sample analysis volume. A small volume gives high sensitivity, but can lead to practical problems, including fabrication difficulties and clogging of the device. In addition, the spatial location of each particle needs to be controlled accurately within the field, so an efficient and accurate method for focussing the particles in the centre of the electric field is important. In this thesis, a micro-impedance cytometer for the detection of small particles and bacteria, and for the discrimination of particles that are very similar in size, is presented. The device consists of a microfluidic channel in which two pairs of microfabricated electrodes perform differential measurements of single particles in suspension at high speed. Different electrode configurations and different techniques for focussing the sample within the detection region of the device are investigated in order to improve the sensitivity of the system without reducing the dimensions of the microfluidic channel. Detection of particles occupying as little as an estimated 0.007% of the sensing volume, and discrimination of 1 μm and 2 μm diameter polystyrene beads and of E. coli, are demonstrated. The micro-impedance cytometer is also shown to be a reliable and effective system for investigating and determining the unknown dielectric properties of particles in suspension, such as polyelectrolyte microcapsules.
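The differential measurement scheme described above can be illustrated with synthetic signals: a particle transiting two sensing zones produces two delayed pulses, and subtracting the two electrode-pair signals yields a bipolar pulse with zero baseline, cancelling drift common to both pairs. All numerical values here are illustrative, not measured device parameters:

```python
import math

n = 2000
t = [k * 1e-3 / (n - 1) for k in range(n)]            # 1 ms record
transit, spacing, width = 3e-4, 1e-4, 3e-5            # arrival time, zone delay, pulse width (s)

def pulse(t0):
    # Gaussian stand-in for the impedance change as a particle passes a zone
    return [math.exp(-((tk - t0) / width) ** 2) for tk in t]

drift = [0.05 * tk / t[-1] for tk in t]               # slow drift common to both pairs
pair1 = [p + d for p, d in zip(pulse(transit), drift)]
pair2 = [p + d for p, d in zip(pulse(transit + spacing), drift)]
diff = [a - b for a, b in zip(pair1, pair2)]          # common-mode drift cancels exactly

peak_to_peak = max(diff) - min(diff)                  # pulse height scales with particle volume
```

The bipolar shape also gives the transit time between zones (from the zero crossing), which is how such devices recover particle velocity as well as size.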

Simulation techniques for the study of the manoeuvring of advanced rotorcraft configurations

Rutherford, Stephen January 1997 (has links)
Inverse simulation is a technique by which the control actions necessary for a vehicle to perform a particular manoeuvre can be calculated. The manoeuvre definition is thus the input to the problem, and the output is a time history of the control motions. The realism of a result is clearly dependent on the fidelity and sophistication of the vehicle mathematical model. Present inverse simulation algorithms are limited by being model-specific and only able to accommodate models of restricted complexity. For helicopters specifically, the models used in inverse simulation are, in general, rudimentary in nature. The existing inverse simulation algorithm at Glasgow University, "Helinv", is specific to the helicopter model, "HGS". Though HGS is very advanced by comparison with other inverse simulation helicopter models, it lags far behind the state of the art in conventional simulation. The principal aims of this research were therefore twofold: to develop a robust, generic inverse simulation algorithm, "Genisa"; and to develop a state-of-the-art individual-blade helicopter rotor model, "Hibrom". Naturally, verification and validation were integral to these aims. With these objectives achieved, the intention was to demonstrate the flexibility of Genisa and the value of Hibrom by performing inverse simulations of various rotorcraft configurations. As well as representing a novel tool in rotorcraft simulation, the development of a flexible inverse simulation algorithm which can accommodate complex models extends the boundaries of inverse problems in general. Genisa has proven to be both flexible and robust. Hibrom has been verified, validated and, using Genisa, successfully used in inverse simulation. The advantages of an individual-blade model in inverse simulation have been demonstrated by comparing results with those of the disc model, HGS. Inverse simulations have been performed for various rotorcraft configurations, identifying the respective benefits of the different vehicles. In all respects the aims identified above have been met in full.
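The essence of inverse simulation, inverting the vehicle model at each time step to recover the control that produces the specified response, can be sketched on a deliberately simple system. A toy one-dimensional example with Newton iteration (illustrative only; the Genisa/HGS/Hibrom models are far richer than this):

```python
# Toy inverse simulation: given a desired acceleration history for a
# point mass with nonlinear (quadratic) drag, solve for the control
# force at each step by Newton's method, then march the state forward
# with the recovered control.
mass, drag = 1.0, 0.2
dt, steps = 0.01, 100

def accel(u, v):
    """Forward model: a = (u - drag * v * |v|) / mass."""
    return (u - drag * v * abs(v)) / mass

v, controls = 0.0, []
for k in range(steps):
    a_target = 1.0                    # manoeuvre definition: constant 1 m/s^2
    u = 0.0
    for _ in range(20):               # Newton iteration on f(u) = accel(u, v) - a_target
        f = accel(u, v) - a_target
        dfdu = 1.0 / mass             # analytic Jacobian of this toy model
        u -= f / dfdu
    controls.append(u)
    v += accel(u, v) * dt             # state propagation with the found control
```

The recovered control history grows over the manoeuvre to offset the rising drag force, which is exactly the kind of time history of control motions an inverse simulation returns.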

An artificial intelligence framework for experimental design and analysis in discrete event simulation

Taylor, Richard Paul January 1988 (has links)
Simulation studies cycle through the phases of formulation, programming, verification and validation, experimental design and analysis, and implementation. The work presented here has been concerned with developing methods to enhance the practice of, and support for, the experimental design and analysis phase of a study. The investigation focussed on introducing Artificial Intelligence (AI) techniques to this phase, where previously little support existed. The reason for this approach was the realisation that the experimentation process in a simulation study can be broken down into a reasoning component and a control-of-execution component. In most studies, a user would perform both of these. The involvement of a reasoning process attracted the notion of artificial intelligence, or at least the prospective use of its techniques. After a study of the current state of the art, work began by considering the development of a support system for experimental design and analysis that had human intelligence and machine control of execution. This provided a semi-structured decision-making environment in the form of a controller that requested human input. The controller was made intelligent when it was linked to a non-procedural (PROLOG) program that provided remote intelligent input from either the user or default heuristics. The intelligent controller was found to enhance simulation experimentation because it ensures that all the steps in the experimental design and analysis phase take place and receive appropriate input. The next stage was to adopt the view that simulation experimental design and analysis may be enhanced through a system that had machine intelligence but expected human control of execution. This provided the framework of an advisor that adopted a consultation expert-system paradigm. Users were advised on how to perform simulation experimentation. Default reasoning strategies were implemented to provide the system with advisory capabilities for the tasks of prediction, evaluation, comparison, sensitivity analysis, transient behaviour, functional relations and optimisation. Later, the controller and the advisor were linked to provide an integrated system with both machine intelligence and machine control of execution. User involvement in the experimentation process was reduced considerably, as support was provided in both the reasoning and control-of-execution aspects. Additionally, this integrated system supports facilities for refinement purposes that aim at turning the system's knowledge into expertise. It became theoretically possible for other simulation experts to teach the system or experiment with their own rules and knowledge. The following stage considered making the knowledge of the system available to the user, thereby turning the system into a teacher and providing pedagogical support. Teaching was introduced through explanation and demonstration. The explanation facility used a mixed approach: it combined a first-time-response explanation facility for "how" and "why" questions with a menu-driven information system facility for "explain" requests or further queries. The demonstration facility offers tutorials on the use of the system and on how to carry out an investigation of any of the tasks that the system can address. The final part of the research was to collect empirical results about the performance of the system. Some experiments were performed retroactively on existing studies. The system was also linked to a data-driven simulation package that permitted evaluation using some large-scale industrial applications. The system's performance was measured by its ability to perform as well as students with simulation knowledge but not necessarily expertise. The system was also found to assist users with little or no simulation knowledge to perform as well as students with knowledge. This study represents the first practical attempt to use the expert system framework to model the processes involved in simulation experimentation. The framework described in this thesis has been implemented as a prototype advisory system called WES (Warwick Expert Simulator). The thesis concludes that the framework proposed is robust for this purpose.
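The consultation expert-system paradigm described above pairs a declarative rule base with a control loop that asks the user only for facts it cannot infer. A minimal backward-chaining sketch in Python, in the spirit of a PROLOG rule base; the rule names and rules here are hypothetical illustrations, not the WES knowledge base:

```python
# Rules map a goal to the sub-facts it needs; leaf facts not already
# known are obtained by consulting the user (stubbed out here).
rules = {
    "use_comparison_study": ["two_or_more_policies", "common_random_numbers_ok"],
    "common_random_numbers_ok": ["model_supports_seed_control"],
}

facts = {"two_or_more_policies": True, "model_supports_seed_control": True}

def prove(goal, ask=lambda g: False):
    """Backward-chain on 'goal': use a known fact, expand a rule
    (goal holds if all sub-goals hold), or consult the user."""
    if goal in facts:
        return facts[goal]
    if goal in rules:
        facts[goal] = all(prove(g, ask) for g in rules[goal])
    else:
        facts[goal] = ask(goal)          # leaf fact: ask and remember
    return facts[goal]

advice = prove("use_comparison_study")
```

Because proved sub-goals are cached in `facts`, the same question is never put to the user twice within a consultation, one of the usability points such advisors rely on.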

The development of an information system for drug misuse using self knowledge elicitation

Gupta, Rakesh K. January 1999 (has links)
In the past, information systems have been developed by systems analysts and programmers with minimal end-user involvement. For a long time now, researchers (Lucas 1976, Alter 1996) have been stressing the importance of significant user involvement because it brings a number of benefits: involvement can be challenging and intrinsically satisfying; involvement usually results in more commitment to change; the user becomes more knowledgeable about the change and so is better trained in the use of the system; a better solution to the problem is obtained because users know more about the present system than analysts; and involvement means the user retains much of the control over operations. The contribution that this thesis makes is the concept of self knowledge elicitation as an approach to prototyping, developing and maintaining information systems. A key feature of this concept is the high degree of user involvement in the design and development process. Self knowledge elicitation allows users to build an information system using their own knowledge and expertise, and then to maintain and update that system themselves. This concept initially emerged from a research project involving the development of an Expert Advisory System for AIDS/HIV using traditional development techniques, which were found to have a number of deficiencies, including the time factor. Both formal and informal evaluations of the self knowledge elicitation concept were carried out at 20 different sites throughout Central England, over a minimum period of nine months. The results of these trials indicated that the concept was acceptable and could be used as a practical, cost-effective way of developing and maintaining information systems, especially for the purposes of training and education. Significant technological advances in both hardware and software over recent years (advanced word processors, internet/intranet, web browsers, e-mail, etc.), used appropriately, will increase the availability, functionality and acceptability of the self knowledge elicitation concept.
