421

Defect and fault tolerance techniques for nano-electronics

Melouki, Aissa January 2011 (has links)
Nanotechnology-based devices are believed to be a possible future alternative to CMOS-based devices. It is predicted that the high integration density offered by emerging nanotechnologies will be accompanied by high manufacturing defect rates and high operation-time fault rates. This thesis is concerned with developing defect and fault tolerance techniques to address the low manufacturing yield caused by permanent defects and the reduced computational reliability caused by transient faults projected in nanoscale devices and nanometre CMOS circuits. The described research makes four key contributions. The first contribution is a novel defect tolerance technique to improve the manufacturing yield of nanometre CMOS logic circuits. The technique is based on replacing each transistor by an N²-transistor structure (N ≥ 2) that guarantees defect tolerance of all (N−1) defects. The targeted defects include stuck-open, stuck-short and bridging defects. Extensive simulation results using ISCAS benchmark circuits show that the proposed technique achieves higher manufacturing yield than recently proposed techniques, and at a reduced area overhead. The second contribution is two new repair techniques, named Tagged Replacement and Modified Tagged Replacement, that improve the manufacturing yield of nanoscale crossbars implementing logic circuits as look-up tables (LUTs). The techniques are based on highly efficient repair algorithms that improve yield by increasing the resolution of repair. Simulation results show that the proposed techniques provide higher levels of defect tolerance and have lower redundancy requirements than recently reported techniques. Another popular crossbar-based circuit implementation is nanoscale programmable logic arrays (PLAs). The third contribution is a probabilistic defect tolerance design flow that improves the manufacturing yield of nanoscale PLAs and significantly reduces post-fabrication test and diagnosis time. This is achieved by limiting defect diagnosis to the nanowire level rather than the crosspoint level, as in previously proposed graph-based techniques. The final contribution involves improving both the manufacturing yield and the computational reliability of nanoscale crossbars implementing logic circuits as LUTs. This is achieved by combining Hamming and Bose-Chaudhuri-Hocquenghem (BCH) codes together or with N-Modular Redundancy and Bad Line Exclusion techniques. Simulation results show a significant improvement in fault tolerance by the proposed techniques (targeting fault rates up to 20%) when compared to previously reported single coding schemes.
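To make the coding idea behind the final contribution concrete, the following is a minimal sketch, not the thesis's actual scheme, of how a Hamming(7,4) code protects a 4-bit LUT word against any single-bit fault; the thesis combines such codes with BCH, N-Modular Redundancy and Bad Line Exclusion.

```python
# Illustrative sketch only: single-error correction of a 4-bit LUT entry
# with a Hamming(7,4) code. Codeword layout is [p1, p2, d1, p3, d2, d3, d4].

def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """c: 7-bit codeword (possibly with one flipped bit) -> corrected data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity check over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity check over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based index of the faulty bit, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1         # correct the single-bit fault
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
stored = hamming74_encode(word)
stored[5] ^= 1                        # inject a fault in one stored bit
assert hamming74_decode(stored) == word
```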
422

Numerical simulations of a transverse sonic jet in a laminar hypersonic flow

Dixon, Favian C. January 2003 (has links)
No description available.
423

An investigation of boundary-driven streaming in acoustofluidic systems for particle and cell manipulation

Lei, Junjun January 2015 (has links)
No description available.
424

Micro-impedance cytometry

Bernabini, Catia January 2010 (has links)
Electrical impedance spectroscopy is a non-invasive and label-free technique that allows rapid counting and characterisation of particles in suspension based on their response to applied AC potentials. In recent years, lab-on-a-chip technologies have been developed to enable single-cell impedance detection, and a wide range of impedance-based microfluidic devices have been reported. Despite the number of contributions and the achievements of this field, micro-impedance cytometry still suffers from a lack of sensitivity and specificity compared to traditional flow cytometry, which limits the potential commercialisation of microfluidic impedance devices. While impedance measurements of beads and cells are well established, discrimination between particles that are very similar in size, or detection of small particles (around 1 μm in diameter) such as bacteria, still represents a difficult task. A number of issues limit the sensitivity and specificity of these microfluidic systems. Primarily, the sensitivity is governed by the dimensions of the sample analysis volume. A small volume gives high sensitivity, but this can lead to practical problems, including fabrication difficulties and clogging of the device. In addition, the spatial location of each particle needs to be controlled accurately within the field. Therefore, an efficient and accurate method for focussing the particles in the centre of the electric field is important. In this thesis, a micro-impedance cytometer for the detection of small particles and bacteria, and for the discrimination of particles that are very similar in size, is presented. The device consists of a microfluidic channel in which two pairs of microfabricated electrodes perform differential measurements of single particles in suspension at high speed. Different electrode configurations and different techniques for focussing the sample within the detection region of the device are investigated in order to improve the sensitivity of the system without reducing the dimensions of the microfluidic channel. Detection at a particle-to-estimated-sensing-volume ratio of 0.007%, and discrimination of 1 μm and 2 μm diameter polystyrene beads and of E. coli, are demonstrated. The micro-impedance cytometer is also shown to be a reliable and effective system for investigating and determining the unknown dielectric properties of particles in suspension, such as polyelectrolyte microcapsules.
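For context, the quoted 0.007% figure can be reproduced with a quick back-of-envelope calculation; the bead diameter comes from the abstract, while the sensing-volume dimensions below are illustrative assumptions rather than the device's actual geometry.

```python
# Back-of-envelope check of the quoted particle-to-sensing-volume ratio (~0.007%).
import math

bead_diameter_um = 1.0
bead_volume = math.pi / 6 * bead_diameter_um ** 3   # sphere volume, ~0.52 um^3

# Assumed detection region between one electrode pair (hypothetical figures):
sensing_volume = 20 * 20 * 19                        # ~7600 um^3

ratio_percent = 100 * bead_volume / sensing_volume
print(f"volume ratio ~ {ratio_percent:.4f} %")       # prints ~0.0069 %
```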
425

Simulation techniques for the study of the manoeuvring of advanced rotorcraft configurations

Rutherford, Stephen January 1997 (has links)
Inverse simulation is a technique by which the control actions necessary for a vehicle to perform a particular manoeuvre can be calculated. The manoeuvre definition is thus the input to the problem, and the output is a time history of the control motions. The realism of a result is clearly dependent on the fidelity and sophistication of the vehicle mathematical model. Present inverse simulation algorithms are limited by being model-specific and only able to accommodate models of restricted complexity. For helicopters specifically, the models used in inverse simulation are, in general, rudimentary in nature. The existing inverse simulation algorithm at Glasgow University, "Helinv", is specific to the helicopter model "HGS". Though HGS is very advanced by comparison with other inverse simulation helicopter models, it lags far behind the state of the art in conventional simulation. The principal aims of this research were therefore twofold: to develop a robust, generic inverse simulation algorithm, "Genisa"; and to develop a state-of-the-art individual-blade helicopter rotor model, "Hibrom". Naturally, verification and validation were integral to these aims. These objectives having been achieved, the intention was to demonstrate the flexibility of Genisa and the value of Hibrom by performing inverse simulations of various rotorcraft configurations. As well as representing a novel tool in rotorcraft simulation, the development of a flexible inverse simulation algorithm which can accommodate complex models extends the boundaries of inverse problems in general. Genisa has proven to be both flexible and robust. Hibrom has been verified, validated and, using Genisa, successfully used in inverse simulation. The advantages of an individual-blade model in inverse simulation have been demonstrated by comparing results with the disc model, HGS. Inverse simulations have been performed for various rotorcraft configurations, identifying the respective benefits of the different vehicles. In all respects, the aims identified above have been met in full.
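To make the inverse-simulation idea concrete, here is a toy one-state sketch, in no way representing the Genisa or Helinv algorithms: a prescribed manoeuvre (a desired velocity history) is inverted step by step on a deliberately simple vehicle model to recover the control time history.

```python
# Toy illustration of inverse simulation: manoeuvre in, control history out.
import math

def model_step(v, u, dt):
    """Very simple vehicle model: acceleration = control minus quadratic drag."""
    return v + dt * (u - 0.05 * v * abs(v))

def invert_step(v, v_target, dt, iters=20):
    """Newton iteration (finite-difference derivative) for the control u that
    drives the state from v to v_target over one timestep."""
    u = 0.0
    for _ in range(iters):
        err = model_step(v, u, dt) - v_target
        d_err = (model_step(v, u + 1e-6, dt) - model_step(v, u, dt)) / 1e-6
        u -= err / d_err
    return u

dt, v = 0.1, 0.0
controls = []
for k in range(1, 51):
    v_desired = 10.0 * math.sin(0.1 * k * dt)   # prescribed manoeuvre
    u = invert_step(v, v_desired, dt)
    controls.append(u)                           # output: control time history
    v = model_step(v, u, dt)                     # fly the model with that control
```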
426

An artificial intelligence framework for experimental design and analysis in discrete event simulation

Taylor, Richard Paul January 1988 (has links)
Simulation studies cycle through the phases of formulation, programming, verification and validation, experimental design and analysis, and implementation. The work presented has been concerned with developing methods to enhance the practice of, and support for, the experimental design and analysis phase of a study. The investigation focussed on the introduction of Artificial Intelligence (AI) techniques to this phase, where previously there existed little support. The reason for this approach was the realisation that the experimentation process in a simulation study can be broken down into a reasoning component and a control-of-execution component. In most studies, a user would perform both of these. The involvement of a reasoning process attracted the notion of artificial intelligence, or at least the prospective use of its techniques. After a study into the current state of the art, work began by considering the development of a support system for experimental design and analysis that had human intelligence and machine control of execution. This provided a semi-structured decision-making environment in the form of a controller that requested human input. The controller was made intelligent when it was linked to a non-procedural (PROLOG) program that provided remote intelligent input from either the user or default heuristics. The intelligent controller was found to enhance simulation experimentation because it ensures that all the steps in the experimental design and analysis phase take place and receive appropriate input. The next stage was to adopt the view that simulation experimental design and analysis may be enhanced through a system that had machine intelligence but expected human control of execution. This provided the framework of an advisor that adopted a consultation expert system paradigm. Users were advised on how to perform simulation experimentation. Default reasoning strategies were implemented to provide the system with advisory capabilities in the tasks of prediction, evaluation, comparison, sensitivity analysis, transient behaviour, functional relations and optimisation. Later, the controller and the advisor were linked to provide an integrated system with both machine intelligence and machine control of execution. User involvement in the experimentation process was reduced considerably as support was provided in both the reasoning and control-of-execution aspects. Additionally, this integrated system supports facilities for refinement purposes that aim at turning the system's knowledge into expertise. It became theoretically possible for other simulation experts to teach the system or experiment with their own rules and knowledge. The following stage considered making the knowledge of the system available to the user, thereby turning the system into a teacher and providing pedagogical support. Teaching was introduced through explanation and demonstration. The explanation facility used a mixed approach: it combined a first-time response explanation facility for "how" and "why" questions with a menu-driven information system facility for "explain" requests or further queries. The demonstration facility offers tutorials on the use of the system and on how to carry out an investigation of any of the tasks that the system can address. The final part of the research was to collect some empirical results about the performance of the system. Some experiments were performed retroactively on existing studies. The system was also linked to a data-driven simulation package that permitted evaluation using some large-scale industrial applications. The system's performance was measured by its ability to perform as well as students with simulation knowledge but not necessarily expertise. The system was also found to assist users with little or no simulation knowledge to perform as well as students with knowledge. This study represents the first practical attempt to use the expert system framework to model the processes involved in simulation experimentation. The framework described in this thesis has been implemented as a prototype advisory system called WES (Warwick Expert Simulator). The thesis concludes that the framework proposed is robust for this purpose.
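As a rough illustration of the reasoning/control-of-execution split described above, a controller can obtain an experiment plan from an advisor and then execute the runs; the task names and default heuristics here are invented for the sketch and are not taken from WES.

```python
# Toy sketch of the advisor (reasoning) / controller (execution) split.
# All task names and heuristic values are illustrative assumptions.

DEFAULT_HEURISTICS = {
    "prediction":  {"replications": 10, "warm_up": True},
    "comparison":  {"replications": 20, "common_random_numbers": True},
    "sensitivity": {"design": "factorial", "levels": 2},
}

def advisor(task, user_override=None):
    """Reasoning component: build an experiment plan, preferring user input
    and falling back to default heuristics."""
    plan = dict(DEFAULT_HEURISTICS.get(task, {}))
    plan.update(user_override or {})
    return plan

def controller(task, run_model, user_override=None):
    """Control-of-execution component: obtain a plan and execute the runs."""
    plan = advisor(task, user_override)
    results = [run_model(seed=i) for i in range(plan.get("replications", 5))]
    return plan, results

# Example with a dummy simulation model:
plan, results = controller("prediction", run_model=lambda seed: 42.0 + seed)
```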
427

The development of an information system for drug misuse using self knowledge elicitation

Gupta, Rakesh K. January 1999 (has links)
In the past, information systems have been developed by system analysts and programmers, with end-user involvement kept to a minimum. For a long time now, researchers (Lucas 1976, Alter 1996) have been stressing the importance of significant user involvement because it brings a number of benefits: involvement can be challenging and intrinsically satisfying; involvement usually results in more commitment to change; the user becomes more knowledgeable about the change and so is better trained in the use of the system; a better solution to the problem is obtained because users know more about the present system than analysts; and involvement means the user retains much of the control over operations. The contribution that this thesis makes is the concept of self knowledge elicitation as an approach to prototyping, developing and maintaining information systems. A key feature of this concept is the high degree of user involvement in the design and development process. Self knowledge elicitation allows users to build an information system using their own knowledge and expertise, and then also allows them to maintain and update this system. This concept initially emerged from a research project which involved the development of an Expert Advisory System for AIDS/HIV using traditional development techniques, which were found to have a number of deficiencies, including the time factor. Both formal and informal evaluations of the self knowledge elicitation concept were carried out at 20 different sites throughout Central England, over a minimum period of nine months. The results of these trials indicated that the concept was acceptable and could be used as a practical, cost-effective way of developing and maintaining information systems, especially for the purposes of training and education. Significant technological advances in both hardware and software over recent years (advanced word processors, internet/intranet, web browsers, e-mail, etc.), used appropriately, will increase the availability, functionality and acceptability of the self knowledge elicitation concept.
428

Design of low-cost smart antennas for wireless communications

Gu, Chao January 2017 (has links)
Traditional smart antennas are complicated, bulky, power-hungry and expensive, as they require a large number of radio frequency (RF)/microwave phase shifters and transmit/receive (T/R) modules. For wide applications in civilian wireless communications, it is important to investigate novel designs of electronically beam-steerable smart antennas which feature compact size, low power, and low cost. This dissertation presents novel designs and implementations of low-cost smart antennas for wireless communications. Four different designs of low-cost smart antennas are presented, and these smart antennas can be categorized into two types. The first type is an electronically beam-switching antenna based on the concept of the electrically steerable parasitic array radiator (ESPAR). The design utilizes the strong mutual coupling between the driven element and reconfigurable parasitic elements to electronically steer the beams. A polarization-reconfigurable square patch is employed as the driven element, surrounded by reconfigurable parasitic dipoles. The antenna does not require any microwave phase shifters and is shown to be able to achieve electronic beam switching and polarization reconfigurability by electronically controlling the PIN diodes. The second type is an electronically beam-switching antenna using active frequency selective surfaces (FSS). Omnidirectional feeders are employed to illuminate reconfigurable FSS cylinders which consist of a number of unit cells loaded with PIN diodes or varactors. By controlling the DC bias of individual columns of the FSS cylinder, directive beams can be swept across the entire azimuth plane. Based on different active FSS unit cells, three different low-cost smart antennas have been designed, including a dual-band electronically beam-switching antenna, a 3-D beam-scanning antenna, and an electronically beam-switching antenna with continuous frequency tuning. In this thesis, in order to evaluate the antenna performance, comprehensive full-wave electromagnetic (EM) simulations are carried out using commercial software. Furthermore, prototypes are fabricated and tested to validate the design concepts. Good agreement between the simulation and measurement results is achieved, demonstrating that the smart antennas designed in this thesis have the advantages of low cost and low power, thus rendering them promising for applications in wireless communications.
429

The anarchist cinema

Newton, James January 2016 (has links)
There has been only a minimal amount written in academic circles on the connections between political anarchism and cinema. Alan Lovell focuses on allegorical readings of films by Jean Vigo, Luis Bunuel, and Georges Franju. Richard Porton examines the historical representation of anarchists and their ideas. More recently, Nathan Jun lays out ideas for a proposed ‘cinema of liberation’. Yet these three writers, who provide the most notable attempts at wrestling with the subject, barely refer to one another. The existing scholarship is therefore disconnected and fails to fully analyse the complex series of relationships that exist between anarchism and film. My thesis attempts to address these gaps, and suggests ways in which anarchist theory can be used as a framework to inform our understanding of cinema as a cultural and industrial institution, and also to provide an alternative process of reading and interpreting films. In analysing the dynamics between anarchist theory and film, it focuses on three key areas. Firstly, it considers the notion that cinema is an inherently anarchic space, based around fears of unruly (predominantly working-class) audiences. Secondly, it attempts to delineate what the criteria for an anarchist film could be, by looking at a range of formal characteristics and content featured in a number of popular movies. And thirdly, it examines the place of grassroots and DIY filmmaking in the wider context of an anarchist cinema. My thesis identifies the continuities that exist between radical film culture of the present and the past, and I propose that there is an innately anarchic undercurrent to several key aspects of cinematic culture. The thesis concludes by stressing the distinction that exists between film as a text and cinema as a range of cultural activities. I propose that the ultimate embodiment of a study of an anarchist cinema should combine film analysis with an examination of cinema as a social and physical space. In turn, this can help us to consider the ways in which film and cinema may form part of a culture of resistance, one which fully articulates the concerns and questions surrounding anarchist political theory.
430

Plasmonic enhanced pyroelectrics for microfluidic manipulation

Esan, Olurotimi January 2017 (has links)
Plasmon-enhanced micromanipulation addresses some of the drawbacks associated with more traditional optical methods, particularly with regard to the nature of the laser excitation required for actuation. The resonant electromagnetic field enhancement observed as a result of the plasmon resonance phenomenon enables trapping of nanoscale objects and reduces the risk of photoinduced sample damage by reducing the excitation power required for trapping. However, plasmon resonance also introduces an unavoidable heating effect which hinders stable trapping in microfluidic environments as a result of phenomena such as convection. In this work, the heating associated with plasmon resonance is used constructively to devise a new micromanipulation technique. Plasmonic nanostructures are patterned on pyroelectric substrates, which create an electric field in response to changes in temperature. This electric field results in the generation of local and global electrokinetic phenomena, which are used in high-throughput trapping of suspended particles. To demonstrate the versatility of this technique, particles are patterned into arbitrary shapes. A suggested application of this technique is as an optically controlled, photoresist-free lithographic method for use in microfluidic environments.
