  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
461

EEG pattern classification for the brain-computer musical interface

Duncan, Alexander A. January 2001 (has links)
No description available.
462

Investigations into the effect of heat and mass transfer on flow across tube bundles

Aly, Samir El-Sayed January 1979 (has links)
No description available.
463

Frequency selective grating filters for integrated optics

Yi Yan, Alfredo January 1978 (has links)
No description available.
464

Metrology and characterisation of defects in thin-film barrier layers employed in flexible photovoltaic modules

Elrawemi, Mohamed January 2015 (has links)
Flexible thin-film photovoltaic (PV) modules based on copper indium gallium selenide (CIGS) materials are one of the most recent developments in the renewable energy field, and the latest films have efficiencies at or beyond the level of Si-based rigid PV modules. Whilst these films offer significant advantages in terms of mass and the possibility of building-integrated photovoltaic (BIPV) applications, they are at present highly susceptible to long-term environmental degradation as a result of water vapour transmission through the protective encapsulation layer to the active (absorber) layer. To maintain the PV module flexibility and to reduce or eliminate the water vapour permeability, the PV encapsulation includes a barrier layer of amorphous aluminium oxide (Al2O3) material of a few nanometres thickness deposited on a planarised polyethylene naphthalate (PEN) substrate. The highly conformal Al2O3 barrier layer is produced by atomic layer deposition (ALD) methods using roll-to-roll (R2R) technology. Nevertheless, water vapour permeation is still facilitated by the presence of micro- and nano-scale defects generated during the deposition processes of the barrier material, which results in decreased cell efficiency and reduced unit longevity. State-of-the-art surface metrology technologies, including optical microscopy, white light scanning interferometry (WLSI), atomic force microscopy (AFM) and scanning electron microscopy (SEM), were extensively deployed in this project as offline surface characterisation methods to characterise the water vapour barrier layer defects, which are postulated to be directly responsible for the water vapour ingress. Areal surface texture parameter analysis based on Wolf pruning, area pruning and segmentation methods, as defined in ISO 25178-2, allows the efficient separation of small insignificant defects from significant ones.
The presence of both large and small defects is then correlated with the barrier film functionality as measured on typical sets of Al2O3 ALD films using a standard MOCON® (quantitative gas permeation) test. The initial analysis concludes, based on the water vapour transmission rate (WVTR) and the defect size, density and distribution, that small numbers of large defects have more influence on the deterioration of barrier film functionality than large numbers of small defects. This result then provided the basis for developing a roll-to-roll in-process metrology device for quality control of flexible PV barrier films. Furthermore, a theoretical model was developed in this thesis, based on water vapour diffusion theory, to determine the cut-off level between large significant defects and small insignificant defects. The results of the model suggest that, in order to build an in-process, non-contact optical defect detection system for R2R barrier films, the critical spatial resolution required for defect detection need not be finer than 3 μm laterally and 3Sq nm (Sq = root mean square surface roughness deviation of a non-defective sample area) per field of view (FOV) vertically. Any defect with dimensions smaller than this appears to have a significantly lower effect on the PV barrier properties and functionality. In this study, the surface topography analysis results and the theoretical model outcomes both provide the basis for developing a R2R in-process metrology device for PV barrier film defect detection. Finally, the work in this thesis reports on the deployment of novel in-line interferometric optical sensors based on wavelength scanning interferometry (WSI), designed to measure and catalogue PV barrier film defects where they are present.
The sensors have built-in environmental vibration compensation and are being deployed on a demonstrator system at an R2R production facility in the UK.
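The cut-off described in the abstract (3 μm laterally, 3×Sq vertically) can be sketched as a simple significance filter. This is an illustrative reading of the stated thresholds only; the function names, defect representation and the exact combination of the two criteria are assumptions, not taken from the thesis:

```python
def is_significant_defect(lateral_um, peak_height_nm, sq_nm):
    """Illustrative cut-off: treat a barrier-layer defect as significant
    for water vapour ingress when it is at least 3 um across laterally
    and at least 3 * Sq nm in height, where Sq is the RMS roughness of
    a defect-free sample area."""
    return lateral_um >= 3.0 and peak_height_nm >= 3.0 * sq_nm

def count_significant(defects, sq_nm):
    """Count significant defects in a list of (lateral_um, height_nm) pairs."""
    return sum(1 for lat, h in defects if is_significant_defect(lat, h, sq_nm))
```

For example, with Sq = 2 nm a 5 μm wide, 10 nm tall defect passes the filter, while a 1 μm wide defect of the same height does not.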
465

The application of ANN and ANFIS prediction models for thermal error compensation on CNC machine tools

Abdulshahed, Ali January 2015 (has links)
Thermal errors can have significant effects on Computer Numerical Control (CNC) machine tool accuracy. The errors come from thermal deformations of the machine elements caused by heat sources within the machine structure or from ambient temperature change. The effect of temperature can be reduced by error avoidance or numerical compensation. The performance of a thermal error compensation system essentially depends upon the accuracy and robustness of the thermal error model and its input measurements. This thesis first reviews different methods of designing thermal error models, before concentrating on employing Artificial Intelligence (AI) methods to design different thermal prediction models. In this research work the Adaptive Neuro-Fuzzy Inference System (ANFIS) is used as the backbone for thermal error modelling. The choice of inputs to the thermal model is a non-trivial decision which is ultimately a compromise between the ability to obtain data that sufficiently correlates with the thermal distortion and the cost of implementing the necessary feedback sensors. In this thesis, temperature measurement was supplemented by direct distortion measurement at accessible locations. The location of temperature measurement must also provide a representative measurement of the change in temperature that will affect the machine structure. The number of sensors and their locations are not always intuitive, and the time required to identify the optimal locations is often prohibitive, resulting in compromise and poor results. In this thesis, a new intelligent system for reducing the thermal errors of machine tools using thermography data is introduced. Different groups of key temperature points on a machine can be identified from thermal images using a novel schema based on Grey system theory and the Fuzzy C-Means (FCM) clustering method.
This novel method simplifies the modelling process, enhances the accuracy of the system and reduces the overall number of inputs to the model, since otherwise a much larger number of thermal sensors would be required to cover the entire structure. An Adaptive Neuro-Fuzzy Inference System with Fuzzy C-Means clustering (ANFIS-FCM) is then employed to design the thermal prediction model. In order to optimise the approach, a parametric study is carried out by changing the number of inputs and the number of Membership Functions (MFs) of the ANFIS-FCM model, and comparing the relative robustness of the designs. The proposed approach has been validated on three different machine tools under different operating conditions. The proposed system has thus been shown to be robust to different internal heat sources and ambient changes, and is easily extensible to other CNC machine tools. Finally, the proposed method is shown to compare favourably against alternative approaches such as an Artificial Neural Network (ANN) model and different Grey models.
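The FCM step used to group key temperature points can be sketched as below. This is a minimal, generic Fuzzy C-Means implementation on toy 1-D temperature readings, not the thesis's actual pipeline (which clusters points extracted from thermal images and combines FCM with Grey system theory):

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, max_iter=300, tol=1e-6, seed=0):
    """Basic Fuzzy C-Means clustering (illustrative sketch).

    X: (n_samples, n_features) array, e.g. temperature-point readings.
    m: fuzziness exponent (m > 1); m = 2 is the common default.
    Returns (centers, memberships), memberships rows summing to 1."""
    rng = np.random.default_rng(seed)
    # Initialise centres from distinct random samples.
    centers = X[rng.choice(len(X), n_clusters, replace=False)].astype(float)
    u = None
    for _ in range(max_iter):
        # Distance from every sample to every centre.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)  # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        u_new = inv / inv.sum(axis=1, keepdims=True)
        if u is not None and np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
        # Update centres as membership-weighted means.
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return centers, u
```

In the thesis setting, each cluster's representative point (or the cluster centre) would become one model input, reducing the number of physical sensors needed.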
466

The utilisation of information available in a sensorless control system of an AC induction motor for condition monitoring

Abu Saad, Samieh January 2015 (has links)
Induction motor driven mechanical transmission systems are widely utilised in many applications across numerous sectors including industry, power generation and transportation. They are however subject to common failure modes primarily associated with faults in the driven mechanical components. Notably, gearboxes, couplings and bearings can cause significant defects in both the electrical and mechanical systems. Condition monitoring (CM) undertakes a key role in the detection of potential defects in the early development stages, and in turn avoids the catastrophic operational and financial consequences caused by unplanned breakdowns. Meanwhile, variable speed drives (VSDs) have been increasingly deployed in recent years to achieve accurate speed control and higher operational efficiency. Among the different speed control designs, sensorless VSDs deliver improved dynamic performance and obviate speed measurement devices. This solution however results in heightened noise levels and continual changes in the power supply parameters that potentially impede the detection of minute fault features. This study addresses a gap identified through a systematic review of the literature on the monitoring of mechanical systems utilising induction motors (IM) with sensorless VSDs. Specifically, existing techniques prove ineffective for common mechanical faults that develop in gearboxes and in friction-induced scenarios. The primary aim of this research centres on the development of a more effective and accurate diagnostic solution for VSD systems using the data available in a VSD. An experimental research approach is adopted to model and simulate VSD systems under different fault conditions and gather in-depth data on changes in electrical supply parameters: current, voltage and power. Corresponding techniques, including model-based methods and dynamic signature analysis methods, were developed for extracting the changes from noisy measurements.
An observer-based detection technique is developed based on speed and flux observers that are deployed to generate power residuals. Both static and dynamic techniques are incorporated for the first time in order to detect mechanical misalignment and lubrication degradation, each with different degrees of severity. The results of this study demonstrate that observer-based approaches utilising power residual signals can be effective in identifying different faults when monitoring sensorless VSD driven mechanical systems. Specifically, the combination of dynamic and static components of the power supply parameters and control data has proved effective in separating the four types of common faults: shaft misalignment, lubricant shortage, viscosity changes and water contamination. The static data based approach outperforms the dynamic data based approach in detecting shaft misalignments under sensorless operating modes. The dynamic components of power signals, however, record superior results in the detection of different oil degradation problems. Nevertheless, static components of torque-related variables, power and voltage can be used jointly in separating the three tested lubricant faults.
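The residual-generation idea above can be sketched as follows. This is a deliberately simplified illustration: the thesis derives the estimated power from speed and flux observers inside the drive, whereas here the estimate is just an input, and the threshold rule is an assumption:

```python
def split_static_dynamic(signal):
    """Split a sampled power signal into its static component (the mean)
    and its dynamic component (zero-mean fluctuations about the mean)."""
    static = sum(signal) / len(signal)
    dynamic = [s - static for s in signal]
    return static, dynamic

def power_residual(measured, estimated):
    """Residual between measured motor power and the power predicted by
    an observer model; a healthy drive gives a near-zero residual."""
    return [m - e for m, e in zip(measured, estimated)]

def fault_flag(residual, threshold):
    """Flag a fault when the mean absolute residual exceeds a threshold."""
    return sum(abs(r) for r in residual) / len(residual) > threshold
```

The static component would serve fault types such as misalignment, while the dynamic component targets oscillatory signatures such as oil degradation, mirroring the static/dynamic split described in the abstract.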
467

Application of product data management within the product development process

Hooi Leng, Lee January 2002 (has links)
Manufacturing companies need to be able to respond to customer demand quickly and accurately. This requires the capability to manage product data effectively. Product Data Management (PDM) systems have been identified as a solution to deliver this capability by providing the right information to the right people at the right time and in the right format. The foundation of this research is that the concept of PDM is relevant and important within the product development process. This research focuses upon how the PDM concept is applied in practice to define and configure products and how it can be integrated with other major information systems to enable an enterprise-wide information system. To address the research aim, an extensive review of literature was undertaken to investigate the effectiveness of PDM in enhancing the product definition process and in creating an interface between different business functional areas. A survey of PDM system usage was undertaken, aimed at identifying the current level of PDM usage within manufacturing enterprises in the UK. This was followed up by three industrial case studies to provide a degree of validation of the results obtained. A need for effective one-time order capture was identified from the three case studies, which led to the development of a model specification for a late product configuration tool. A prototype system was produced to validate the design specification and was successfully demonstrated to a collaborating company. At the time of submission of this thesis, the collaborating company and the university were working to fund a project to pursue its implementation. The work undertaken has firmly established the relevance of PDM within the product development process and the importance of effective interfaces between PDM and other manufacturing information systems.
The research will be of interest to small and medium sized manufacturing companies searching for solutions to improve the management of their product data and so enhance product definition and configuration.
468

Pulse position modulation coding schemes for optical inter-satellite links in free space

Ghosna, Fadi Jawdat January 2010 (has links)
The rapid and significant development of communications links between satellites has made it possible to support various applications such as relaying voice, video and multimedia. As a result, a great deal of research has been done in this field during the last few years to reduce power consumption and increase transmission reliability. This thesis is concerned with an analysis of inter-satellite links in free space, with optical links using laser sources being considered in particular. It includes a literature survey and a thorough theoretical investigation into designing the model of the link in free space. This thesis describes a novel technique for designing the optical receiver, which consists of a PIN photodiode as the photodetector, a semiconductor optical amplifier (SOA) and a third-order Butterworth filter with central decision detection. In addition, it discusses the use of several different coding schemes for use in such links: multiple pulse position modulation (MPPM); digital pulse position modulation (DPPM); Dicode pulse position modulation (Dicode PPM). This novel optical receiver is investigated and new work is presented in order to examine its noise performance and hence determine its sensitivity and the number of photons received for a specified error rate. Further new work is carried out to compare these coding schemes in terms of error weightings and coding efficiency, showing how the PCM error rate is affected by false alarm and erasure errors for MPPM, DPPM and Dicode PPM coding 3, 4, 5 and 6 bits of PCM. An original maximum likelihood sequence detector (MLSD) is presented in this thesis in order to perform these comparisons. In addition, computer simulation models (using MCAD) are used to compare these three coding schemes operating with 3, 4, 5 and 6 bits of PCM in terms of sensitivity and bandwidth efficiency.
These comparisons show that MPPM coding 3, 4, 5 and 6 bits of PCM is the most appropriate coding scheme for optical inter-satellite links in free space at PCM data rates of 1 Gbit/s.
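The two main coding schemes compared above can be sketched as slot-mapping encoders. These are textbook-style illustrations of DPPM and MPPM frame construction, not the thesis's receiver chain, and the lexicographic MPPM codebook is one arbitrary choice of mapping:

```python
from itertools import combinations

def dppm_encode(pcm_bits):
    """Digital PPM: an n-bit PCM word selects one of 2**n time slots,
    and the output frame carries a single pulse in that slot."""
    slot = int("".join(str(b) for b in pcm_bits), 2)
    frame = [0] * (2 ** len(pcm_bits))
    frame[slot] = 1
    return frame

def mppm_encode(pcm_bits, n_slots, n_pulses):
    """Multiple PPM: the PCM word indexes a combination of n_pulses
    pulse positions out of n_slots slots. C(n_slots, n_pulses) must be
    at least 2**len(pcm_bits), which lets MPPM use fewer slots per
    frame than single-pulse PPM for the same word length."""
    codebook = list(combinations(range(n_slots), n_pulses))
    index = int("".join(str(b) for b in pcm_bits), 2)
    if index >= len(codebook):
        raise ValueError("not enough codewords for this PCM word length")
    frame = [0] * n_slots
    for pos in codebook[index]:
        frame[pos] = 1
    return frame
```

For 3 PCM bits, DPPM needs 8 slots with one pulse, while MPPM with two pulses needs only 5 slots (C(5, 2) = 10 ≥ 8), which is the bandwidth-efficiency advantage the comparison turns on.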
469

An intelligent robust mouldable scheduler for HPC & elastic environments

Kureshi, Ibad January 2016 (has links)
Traditional scheduling techniques are of a bygone era and do not cater for the dynamism of new and emerging computing paradigms. Budget constraints now push researchers to migrate their workloads to public clouds or to buy into shared computing services, as funds for large capital expenditures are few and far between. The sites still hosting large or shared computing infrastructure have to ensure that system utilisation and efficiency is as high as possible. However, the efficiency cannot come at the cost of quality of service, as the availability of public clouds now means that users can move away. This thesis presents a novel scheduling system to improve job turn-around time. The Robust Mouldable Scheduler outlined in these pages utilises real application benchmarks to profile system performance and predict job execution times at different allocations, something no other scheduler does at present. The system is able to make allocation decisions ensuring that jobs fit into the spaces available on the system, using fewer resources, without delaying job completion. The results demonstrate significant improvement in workload turn-around times using real High Performance Computing (HPC) trace logs. Utilising three years of the University of Huddersfield trace logs, the mouldable scheduler consistently simulated faster workload completion. Further, the results establish that by not relying on the user to suggest resource allocations for jobs, the system is able to mitigate bad-put, leading to improved efficiency. A thorough investigation of Research Computing Systems (RCS), workload management systems, scheduling algorithms and strategies, benchmarking and profiling toolkits, and simulators is presented to establish the state of the art. Within this thesis a method to profile applications and workloads that leverages common open-source tools on HPC systems is presented.
The resultant toolkit is used to profile the University of Huddersfield workload, which forms the basis for evaluating the mouldable scheduler. The research also includes advanced computing paradigms, such as utilising Artificial Intelligence methods to improve the efficiency of the scheduler, and Surge Computing, where workloads are scaled beyond institutional firewalls through elastic compute systems.
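The core mouldable-scheduling decision can be sketched as picking, from a benchmark-derived runtime profile, the allocation that minimises predicted turn-around time. The profile shape, the single scalar queue-wait estimate and the function name are all illustrative assumptions, not the thesis's actual model:

```python
def choose_allocation(runtime_profile, free_cores, queue_wait_s=0.0):
    """Pick the core count minimising predicted turn-around time.

    runtime_profile: {cores: predicted_runtime_s}, derived from real
    application benchmarks at each candidate allocation.
    Allocations that do not fit in the currently free cores pay an
    estimated queue wait before they can start."""
    best, best_t = None, float("inf")
    for cores, runtime in runtime_profile.items():
        wait = 0.0 if cores <= free_cores else queue_wait_s
        total = wait + runtime
        if total < best_t:
            best, best_t = cores, total
    return best, best_t
```

With a profile of {16: 100 s, 32: 60 s, 64: 40 s} and 32 free cores, a mouldable decision starts the job now on 32 cores rather than queueing for 64, which is the fit-into-available-space behaviour the abstract describes.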
470

Semi-supervised image classification based on a multi-feature image query language

Pein, Raoul Pascal January 2010 (has links)
The area of Content-Based Image Retrieval (CBIR) deals with a wide range of research disciplines. Being closely related to text retrieval and pattern recognition, probably the most serious issue to be solved is the so-called "semantic gap". Except for very restricted use-cases, machines are not able to recognize the semantic content of digital images as well as humans. This thesis identifies the requirements for a crucial part of CBIR user interfaces, a multimedia-enabled query language. Such a language must be able to capture the user's intentions and translate them into a machine-understandable format. An approach to tackle this translation problem is to express high-level semantics by merging low-level image features. Two related methods are improved for either fast (retrieval) or accurate (categorization) merging. A query language has previously been developed by the author of this thesis. It allows the formation of nested Boolean queries. Each query term may be text- or content-based, and the system merges them into a single result set. The language is extensible by arbitrary new feature vector plug-ins and thus use-case independent. This query language should be capable of mapping semantics to features by applying machine learning techniques; this capability is explored. A supervised learning algorithm based on decision trees is used to build category descriptors from a training set. Each resulting "query descriptor" is a feature-based description of a concept which is comprehensible and modifiable. These descriptors can be used as a normal query and return a result set with a high CBIR-based precision/recall for the desired category. Additionally, a method for normalizing the similarity profiles of feature vectors has been developed, which is essential for performing categorization tasks. To prove the capabilities of such queries, the outcome of a semi-supervised training session with "leave-one-object-out" cross-validation is compared to a reference system.
Recent work indicates that the discriminative power of the query-based descriptors is similar and is likely to be improved further by implementing more recent feature vectors.
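The merging of nested Boolean query terms into a single result set can be sketched with a fuzzy-logic combination of scored result sets. The abstract does not specify the merge semantics; min/max score combination is one common convention and is an assumption here, as are the image identifiers:

```python
def q_and(a, b):
    """AND: keep images present in both result sets; score = min."""
    return {k: min(a[k], b[k]) for k in a.keys() & b.keys()}

def q_or(a, b):
    """OR: union of the result sets; score = max."""
    return {k: max(a.get(k, 0.0), b.get(k, 0.0)) for k in a.keys() | b.keys()}

def q_not(a, candidates):
    """NOT: invert scores within a known candidate set."""
    return {k: 1.0 - a.get(k, 0.0) for k in candidates}
```

Because each operand is just a score dictionary, a text-based term and a feature-vector term plug into the same operators, which is what makes the language extensible by arbitrary feature plug-ins.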
