  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
631

VLSI compatible parallel fabrication and characterisation of down-scaled multi-configuration silicon quantum dot devices

Lin, Y. P. January 2014 (has links)
Electron spins in semiconductor quantum dots (QDs) have been increasingly shown in recent years to be a promising platform for realising the qubit – the basic unit of information in quantum computing. A crucial advantage of silicon QDs over alternative platforms is the potential for scaling to quantum systems containing large numbers of qubits. Electron spins in Si-based QDs also benefit from a much longer spin coherence time than their extensively researched GaAs-based counterparts – a prerequisite which provides the time needed for successful quantum gate operations and quantum computations. In this work, we propose and realise the first very large scale integration (VLSI) compatible process capable of fabricating scalable, repeatable QD systems in parallel using silicon-on-insulator (SOI) technology. 3D finite element method (FEM) capacitance and single-electron circuit simulations are first utilised to demonstrate the suitability of our double quantum dot (DQD) design dimensions in supporting single-electron operation and detection. We also present a new method of detecting single-electron turnstile operations which makes use of the periodicity present in the charge stability diagram of a DQD. Through process optimisation, we fabricate 144 high-density lithographically defined Si DQDs in parallel for the first time, with 80% of the fabricated devices having dimensional variations of less than 5 nm. The novel use of hydrogen silsesquioxane (HSQ) resist with electron beam lithography (EBL) enabled the realisation of lithographically defined, reproducible QD dimensions averaging 51 nm with a standard deviation of 3.4 nm. Combined with an optimised thermal oxidation process, we demonstrate the precise fabrication of QDs ranging from just 10.6 nm to over 20 nm. These are the smallest lithographically defined high-density intrinsic SOI-based QDs achieved to date.
In addition, we demonstrate the flexibility of our fabrication process in its ability to realise a wide variety of complex device designs repeatedly. A key advantage of our process is its ability to support the scalable fabrication of QD devices without significantly affecting fabrication turnover time. Repeatable characteristic QD Coulomb oscillations and Coulomb diamonds, signifying single-electron tunnelling through our system, are observed in the electrical characteristics. We achieve precise, independent and simultaneous control of the single-electron occupation of different QDs, as well as demonstrating evidence suggesting charge detection between QD channels. The unmatched level of clarity observed within the Coulomb blockade diamond characteristics at 4.2 K enables observation of line splitting of the QD’s excited states at this temperature, and readout of the spin orientation of sequential single electrons filling the QD. Through this spin readout, we can infer the number of electrons stored on the QD and, in turn, our ability to control the QD with precision down to the single-electron limit. Statistically, we realise a parallel fabrication yield of 69% of devices demonstrating the ability to switch on and off repeatedly at 4 K cryogenic temperatures, with no leakage and sufficient channel resistances for single-electron turnstile operations. This is the highest yield achieved to date for fabrication of intrinsic SOI-based QD systems.
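The turnstile-detection scheme above exploits the periodic honeycomb structure of a DQD charge stability diagram. As a minimal illustration only (a constant-interaction model with made-up parameters, not the simulated device values from this work), the ground-state charge configuration at a given pair of gate voltages can be computed as:

```python
def ground_state_charges(vg1, vg2, ec1=5.0, ec2=5.0, ecm=1.0, nmax=3):
    """Return the electron occupation (n1, n2) minimising the constant-
    interaction electrostatic energy of a double quantum dot for the given
    (dimensionless) gate voltages. Energies are in arbitrary units; ec1/ec2
    are dot charging energies and ecm the inter-dot coupling."""
    best, best_e = (0, 0), float("inf")
    for n1 in range(nmax + 1):
        for n2 in range(nmax + 1):
            e = (ec1 * (n1 - vg1) ** 2
                 + ec2 * (n2 - vg2) ** 2
                 + 2 * ecm * (n1 - vg1) * (n2 - vg2))
            if e < best_e:
                best_e, best = e, (n1, n2)
    return best
```

Sweeping both gate voltages and marking where the returned occupation changes traces out the honeycomb boundaries whose periodicity the detection method relies on.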
632

Semantic linking and personalization in context

Sah, Melike January 2009 (has links)
The World Wide Web (WWW) was designed for humans to create and share documents. However, it does not support machine-processable data and automated processing. The Semantic Web is an extension of the WWW that can overcome these shortcomings: it provides the technology for creating and sharing data with machine-processable semantics. As a result, data can be used and shared effectively across applications. In this thesis, we investigate Semantic Web technologies for context-based hyperlink creation and personalization, and present two contributions. First, we introduce and implement a novel personalized Semantic Web-enabled portal (known as a semantic portal), called SEMPort, with the aim of improving information discovery and information sharing using Semantic Web technologies. We also provide different Adaptive Hypermedia (AH) methods using ontology-based user models. In our second contribution, we introduce and implement a novel personalized Semantic Web browser, called SemWeB, which augments Web documents with metadata and creates and personalizes context-based hyperlinks and data using ontologies. We have also developed a new behaviour-based user model for Web-based personalization which supports different AH methods. In addition, a novel semantic relatedness measure is proposed. The evaluations showed that our contributions to the development of hypertext systems using Semantic Web technologies are successfully applied for context-based link creation and personalization.
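The semantic relatedness measure proposed in the thesis is not reproduced here; purely as a generic baseline of the kind such measures are compared against, a path-based relatedness over a toy concept graph (hypothetical nodes, not an ontology from this work) might look like:

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first shortest path over an undirected concept graph given
    as {node: [neighbours]}; returns None if the goal is unreachable."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

def relatedness(graph, a, b):
    """Path-based relatedness in [0, 1]: 1 for identical concepts,
    decreasing with ontology distance, 0 if unconnected."""
    d = shortest_path(graph, a, b)
    return 0.0 if d is None else 1.0 / (1.0 + d)

# Toy ontology fragment (symmetric adjacency), purely illustrative:
onto = {
    "dog": ["mammal"], "cat": ["mammal"],
    "mammal": ["dog", "cat", "animal"], "animal": ["mammal"],
    "car": ["vehicle"], "vehicle": ["car"],
}
```

Here "dog" and "cat" are related through their common parent "mammal", while "dog" and "car" live in disconnected parts of the graph and score zero.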
633

An investigation into the structure and properties of polyethylene oxide nanocomposites

Reading, Martin January 2010 (has links)
Polymer nanocomposites have attracted great interest over many years because of the enhanced properties exhibited by such systems. However, it is only recently that the electrical characteristics of this class of material have begun to be studied in detail. Whenever fillers are added to a host polymer matrix, dispersion is of critical importance since, while a well-dispersed nanophase may be beneficial, poor dispersion can have negative consequences. Hence, for nanocomposites to be used appropriately and provide the best properties, a method for observing the dispersion within the matrix is useful. Despite this, evaluating the dispersion of nano-additives in the bulk is far from straightforward using conventional solid-state materials characterization techniques. This study set out to consider the influence of nano-additives on the physical, thermal and electrical properties of poly(ethylene oxide) systems. The initial objective is to investigate the extent to which the dispersion of nanofillers and the effect of host molecular weight can be inferred from rheological analysis. This investigation covers many systems based upon polyethylene oxide (PEO): PEO blends, thermally aged PEO, and PEO composites with montmorillonite (MMT), micro/nano silicon dioxide (SD/nSD) and boehmite (BO) fillers. The study continued from dispersion and solution characterisation to thermal and electrical properties. The effects of additives and treatment on the crystallisation kinetics and thermal transitions are considered. Polymers are best known for their electrically insulating properties; therefore, electrical analyses of AC breakdown and dielectric spectroscopy were also performed. The research has shown that rheological mixing is capable of producing well-dispersed PEO nanocomposites. Addition of fillers during the rheology phase produced the expected monotonic increase in viscosity, apart from boehmite, which formed a very viscous gel after reaching a threshold loading.
Large drops in thermal transitions were observed for the composite samples. All fillers caused a large increase in breakdown strength at higher loadings, except boehmite, which caused the breakdown strength to decrease, an effect discussed in detail.
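AC breakdown strengths of the kind reported here are conventionally summarised by a two-parameter Weibull distribution. A sketch of the standard median-rank fit, using made-up strength values rather than data from this study:

```python
import math

def weibull_fit(strengths):
    """Two-parameter Weibull fit by linear regression on Bernard's median
    ranks; returns (shape beta, scale eta), where eta is the 63.2nd-
    percentile breakdown strength."""
    x = sorted(strengths)
    n = len(x)
    # Bernard's median-rank approximation for the failure probability.
    ranks = [(i - 0.3) / (n + 0.4) for i in range(1, n + 1)]
    X = [math.log(v) for v in x]
    Y = [math.log(-math.log(1.0 - p)) for p in ranks]
    mx, my = sum(X) / n, sum(Y) / n
    beta = (sum((a - mx) * (b - my) for a, b in zip(X, Y))
            / sum((a - mx) ** 2 for a in X))
    eta = math.exp(mx - my / beta)  # from intercept = -beta * ln(eta)
    return beta, eta

# Illustrative breakdown strengths in kV/mm (invented numbers):
beta, eta = weibull_fit([40, 45, 50, 52, 55, 58, 60, 62])
```

A rising beta between filled and unfilled samples would indicate a narrower scatter of breakdown events, while eta tracks the characteristic strength itself.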
634

Methodology of refinement and decomposition in UML-B

Said, Mar Yah January 2010 (has links)
UML-B is a UML-like graphical front end for Event-B that provides support for object-oriented modelling concepts. In particular, UML-B supports class diagrams and state machines, concepts that are not explicitly supported in plain Event-B. In Event-B, refinement is used to relate system models at different abstraction levels. The same abstraction-refinement concepts can also be applied in UML-B. This work introduces the notions of refined classes, refined state machines and extended classtypes to enable refinement of classes and state machines in UML-B, making explicit the structures of class and state machine refinement. This work also introduces several refinement techniques: adding new attributes and associations, adding new classes, elaborating a state, elaborating a transition, moving a class event (or a state machine transition), and adding new classtypes. In Event-B, decomposition is used to decompose a system into components. The same decomposition concepts can be applied in UML-B. This work introduces the techniques of flattening state machines and state grouping to facilitate the decomposition of a UML-B machine. This work also introduces the notion of a composed machine, which composes the component machines. The composed machine refines the machine being decomposed and is used to ensure that the composition of the component machines is a valid refinement. Together with the composed UML-B machine, the notions of included machine, composed event and constituent event are introduced. The UML-B drawing tool and Event-B translator are extended to support the new refinement and decomposition concepts. A case study of an automated teller machine (ATM) is presented to validate the extensions of UML-B with regard to the above notions. The ATM case study also demonstrates the refinement and decomposition techniques introduced above.
In addition, this work provides guidelines for performing refinement and decomposition in UML-B and presents a number of generic invariants that may be used when refining a middleware. The middleware is a component via which a requesting component, such as an ATM, and a responding component, such as a bank, interact in a distributed system.
635

Circuit-level modelling and simulation of carbon nanotube devices

Zhou, Dafeng January 2010 (has links)
The growing academic interest in carbon nanotubes (CNTs) as a promising novel class of electronic materials has led to significant progress in the understanding of CNT physics, including ballistic and non-ballistic electron transport characteristics. Together with the increasing amount of theoretical analysis and experimental study of the properties of CNT transistors, the need for corresponding modelling techniques has also grown rapidly. This research is focused on the electron transport characteristics of CNT transistors, with the aim of developing efficient techniques to model and simulate CNT devices for logic circuit analysis. The contributions of this research can be summarised as follows. Firstly, to accelerate the evaluation of the equations that model a CNT transistor while maintaining high modelling accuracy, three efficient numerical techniques based on piecewise-linear, quadratic polynomial and cubic spline approximation have been developed. The numerical approximation simplifies the solution of the CNT transistor’s self-consistent voltage such that the calculation of the drain-source current is accelerated by at least two orders of magnitude. The numerical approach eliminates complicated calculations in the modelling process and facilitates the development of fast and efficient CNT transistor models for circuit simulation. Secondly, non-ballistic CNT transistors have been considered, and extended circuit-level models which capture both ballistic and non-ballistic electron transport phenomena, including elastic scattering, phonon scattering, strain and tunnelling effects, have been developed. A salient feature of the developed models is their ability to incorporate both ballistic and non-ballistic transport mechanisms without significant computational cost. The developed models have been extensively validated against reported transport theories of CNT transistors and experimental results.
Thirdly, the proposed carbon nanotube transistor models have been implemented on several platforms. The underlying algorithms have been developed and tested in MATLAB, behavioural-level models have been implemented in VHDL-AMS, and improved circuit-level models have been implemented in two versions of the SPICE simulator. As the final contribution of this work, parameter variation analysis has been carried out in SPICE3 to study the performance of the proposed circuit-level CNT transistor models in logic circuit analysis. Typical circuits, including inverters and adders, have been analysed to determine the dependence of correct circuit operation on CNT parameter variation.
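The table-lookup idea behind the first contribution can be illustrated with a toy stand-in for the self-consistent voltage calculation (the function below merely mimics an expensive iterative solve in cost and smoothness; it is not the CNT transport model itself):

```python
import numpy as np

def expensive_solve(v):
    """Toy stand-in for an expensive self-consistency calculation:
    fixed-point iteration for x = v - 0.05 * tanh(x)."""
    x = v
    for _ in range(200):
        x = v - 0.05 * np.tanh(x)
    return x

# Precompute once on a coarse grid, then answer queries by piecewise-linear
# interpolation -- the cheapest of the three approximations mentioned above.
grid = np.linspace(0.0, 1.0, 64)
table = np.array([expensive_solve(v) for v in grid])

def fast_solve(v):
    return np.interp(v, grid, table)
```

Each `fast_solve` call replaces hundreds of iterations with a single table lookup, which is the mechanism by which the thesis's approximations gain their two orders of magnitude; the quadratic and spline variants trade more precomputation for lower interpolation error.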
636

Electrodynamic droplet actuation for lab on a chip system

Aghdaei, Sara January 2011 (has links)
This work presents the development of electrowetting on dielectric and liquid dielectrophoresis as a platform for chemistry, biochemistry and biophysics. These techniques, typically performed on a single planar surface, offer flexibility for interfacing with liquid-handling instruments and performing biological experimentation with easy access for visualisation. Technology for manipulating and mixing small volumes of liquid in microfluidic devices is also crucially important in chemical and biological protocols and in Lab on a Chip devices and systems. The electrodynamic techniques developed here offer rapid droplet translation speeds and bring small droplets into contact, where inertial dynamics achieve rapid mixing upon coalescence. In this work, materials and fabrication processes for both electrowetting on dielectric and liquid dielectrophoresis technology have been developed and refined. The frequency-, voltage- and contact angle-dependent behaviour of both techniques has been measured using two parallel coplanar electrodes. The frequency dependencies of electrowetting and dielectrophoretic liquid actuation indicate that these effects are the high- and low-frequency limits, respectively, of a complex set of forces. An electrowetting-based particle mixer was developed using a custom-made electrode array and the effect of varying voltage and frequency on droplet mixing was examined, with the highest-efficiency mixing being achieved at 1 kHz and 110 V in about 0.55 seconds. A composite electrodynamic technique was used to develop a reliable method for the formation of artificial lipid bilayers within microfluidic platforms for measuring basic biophysical aspects of cell membranes, and for biosensing and drug discovery applications.
Formation of artificial bilayer lipid membranes (BLMs) was demonstrated at the interface of aqueous droplets submerged in an organic solvent-lipid phase, using the liquid dielectrophoresis methods developed in this project to control droplet movement and bring multiple droplets into contact without coalescence. This technique provides a flexible, reconfigurable method for forming, disassembling and reforming BLMs within a microsystem under simple electronic control. BLM formation was shown to be extremely reliable and the BLMs formed were stable (with lifetimes of up to 20 hours) and therefore suitable for electrophysiological analysis. This system was used to assess whether nanoparticle-membrane contact leads to perturbation of the membrane structure. The conductance of artificial membranes was monitored following exposure to nanoparticles using this droplet BLM system. It was demonstrated that the presence of nanoparticles with diameters between 50 and 500 nm can damage protein-free membranes at particle concentrations in the femtomolar range. The effects of particle size and surface chemistry were also investigated. It was shown that a large number of nanoparticles can translocate across a membrane, even when the surface coverage is relatively low, indicating that nanoparticles can exhibit significant cytotoxic effects.
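The voltage dependence of the contact angle in electrowetting on dielectric is classically captured by the Young–Lippmann equation. A sketch with illustrative material parameters (not values measured in this work):

```python
import math

def lippmann_angle(theta0_deg, voltage, eps_r, thickness_m, gamma=0.072):
    """Contact angle (degrees) under electrowetting on dielectric from the
    Young-Lippmann equation: cos(theta) = cos(theta0) +
    eps0*eps_r*V^2 / (2*gamma*d). gamma is the liquid-ambient surface
    tension (N/m); all parameter values here are illustrative."""
    eps0 = 8.854e-12  # vacuum permittivity, F/m
    cos_t = (math.cos(math.radians(theta0_deg))
             + eps0 * eps_r * voltage ** 2 / (2.0 * gamma * thickness_m))
    cos_t = min(cos_t, 1.0)  # crude clamp; real devices saturate earlier
    return math.degrees(math.acos(cos_t))

# A 110 degree resting angle on a 1 um dielectric (eps_r = 3) drops
# substantially as the applied voltage rises:
angle_at_50v = lippmann_angle(110, 50, eps_r=3, thickness_m=1e-6)
```

The equation only describes the low-frequency (electrowetting) limit noted above; at high frequency the liquid-dielectrophoresis force picture takes over.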
637

Circuit rating methods for high temperature cables

Pilgrim, James A. January 2011 (has links)
For the safe and efficient operation of power transmission systems, each system component must have an accurate current rating. Since the advent of formal power networks, a wide variety of methods have been employed to calculate the current carrying capacity of power cables, ranging from simple analytical equations to complex numerical simulations. In the present climate of increasing power demand, but where finance for large-scale network reinforcement schemes is limited, providing an accurate rating becomes paramount to the safe operation of the transmission network. Although the majority of the transmission network in the United Kingdom comprises overhead lines, many vital links make use of high voltage cable circuits. Advances in our ability to manipulate the properties of dielectric materials have led to increased interest among the cable community as to whether new cables could be designed to deliver improved power transfer performance in comparison to traditional technologies. One way in which this might be possible is if the existing conductor temperature limit of 90 °C common to XLPE-based cable systems could be lifted. At the present time a number of polymer systems exhibit potential in this area; however, prior to investing significant resources in their development, it would be valuable to scope out the magnitude of the benefits that such cable systems could deliver to a network operator. In order to determine the scale of the operational benefit available, a comprehensive rating study would need to be undertaken. However, most existing cable rating methodologies were not designed for situations with conductor temperatures in excess of 100 °C and may not be suitable for the task. To allow a quantitative analysis of the benefits available from permitting higher cable conductor temperatures, cable rating techniques for all major installation types have been reviewed and improved.
In buried cable systems, high temperature operation can lead to significant problems with moisture migration which are not easily modelled by traditional calculations. To overcome this, a full dynamic backfill model has been created which explicitly models moisture movement and allows its impact on the thermal profile around a high temperature cable circuit to be established. Comparison is also made with existing forced cooling techniques to benchmark the scale of the benefits attainable from high temperature operation. Cable joints become critical in such forced-cooled systems; to ensure that joint temperatures do not exceed acceptable levels, a full finite element based modelling process has been developed, allowing detailed rating studies to be undertaken. It is not always possible to bury cable circuits, for instance where they are installed in surface troughs or tunnels in urban areas. By applying modern computational fluid dynamics methods it is possible to develop more comprehensive rating methodologies for these air-cooled cable systems, allowing the benefits of high temperature operation in such circumstances to be demonstrated. By utilizing these techniques for an example cable design, it has been possible to provide an in-depth discussion of the advantages available from high conductor temperature operation, while simultaneously noting the potential problems which would need to be mitigated should such a cable design be deployed in an operational setting.
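The headline benefit of lifting the conductor temperature limit can be estimated from the simplest possible steady-state rating calculation. This is a deliberately reduced sketch of the IEC 60287 approach with illustrative values, not one of the detailed models developed in this work:

```python
import math

def conductor_rating(theta_cond, theta_amb, r_ac, t_total):
    """Steady-state current rating (A) when all conductor loss I^2 * r_ac
    (ohm/m) flows through one lumped thermal resistance t_total (K.m/W) to
    ambient. Dielectric, sheath and armour losses are neglected, and every
    value below is illustrative."""
    return math.sqrt((theta_cond - theta_amb) / (r_ac * t_total))

# Same thermal path, two conductor temperature limits:
i_90 = conductor_rating(90, 15, r_ac=3e-5, t_total=1.2)    # conventional XLPE limit
i_120 = conductor_rating(120, 15, r_ac=3e-5, t_total=1.2)  # hypothetical polymer
uplift = i_120 / i_90 - 1.0  # roughly an 18% increase in ampacity
```

In practice the conductor's AC resistance also rises with temperature and moisture migration dries the backfill, both of which erode part of this uplift; capturing those effects is precisely why the full dynamic models described above are needed.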
638

Analysing the content of Web 2.0 documents by using a hybrid approach

Zakaria, Lailatul Qadri binti January 2011 (has links)
User involvement in Web 2.0 has made a significant contribution to the increase in the amount of multimedia content on the Web. Images are one of the most used media, shared across the network to mark user experience in daily life. Interactive applications have allowed users to participate in describing these images, usually in the form of free text, thus gradually enriching the images' descriptions. Nevertheless, these images are often left with crude descriptions or none at all. Web search engines such as Google and Yahoo provide text-based searching to find images by mapping query concepts to the text description of the image, thus limiting information discovery to material with good text descriptions. A similar issue is faced by the text-based search provided by Web 2.0 applications. Images with little description might not contain adequate information, while images with no description become unsearchable by a text-based search. Therefore, there is an urgent need to investigate ways to produce high-quality information to provide insight into document content. The aim of this research is to investigate a means of improving the capability of information retrieval by utilizing Web 2.0 content, the Semantic Web and other emerging technologies. A hybrid approach is proposed which analyses two main aspects of Web 2.0 content, namely text and images. The text analysis consists of using Natural Language Processing and ontologies; its aim is to translate free-text descriptions into a semantic information model tailored to Semantic Web standards. The image analysis is developed using machine learning tools and is assessed using ROC analysis; its aim is to develop an image classifier exemplar to identify information in images based on their visual features. The hybrid approach is evaluated using the standard information retrieval performance metrics of precision and recall.
The example semantic information model has structured and enriched the textual content, thus providing better retrieval results compared to a conventional tag-based search. The image classifier is shown to be useful for providing additional information about image content. Each of the approaches has its own strengths and they complement each other in different scenarios. The thesis demonstrates that the hybrid approach has improved information retrieval performance compared to either of the contributing techniques used separately.
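The precision and recall figures used in the evaluation follow directly from the retrieved and relevant sets. A minimal sketch with hypothetical image identifiers:

```python
def precision_recall(retrieved, relevant):
    """Standard IR metrics over sets of document identifiers: precision is
    the fraction of retrieved items that are relevant, recall the fraction
    of relevant items that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical identifiers, for illustration only:
p, r = precision_recall({"img1", "img2", "img3", "img4"},
                        {"img1", "img2", "img5"})
```

Two of the four retrieved images are relevant (precision 0.5), and two of the three relevant images were found (recall 2/3), which is exactly the trade-off the hybrid text-plus-image approach aims to improve on.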
639

Energy-efficient design and implementation of turbo codes for wireless sensor network

Li, Liang January 2012 (has links)
The objective of this thesis is to apply near-Shannon-limit Error-Correcting Codes (ECCs), particularly turbo-like codes, to energy-constrained wireless devices for the purpose of extending their lifetime. Conventionally, sophisticated ECCs are applied to applications such as mobile telephone networks or satellite television networks to facilitate long range and high throughput wireless communication. For low-power applications, such as Wireless Sensor Networks (WSNs), these ECCs were long considered unsuitable due to their high decoder complexities. In particular, the energy efficiency of the sensor nodes in WSNs is one of the most important factors in their design. The processing energy consumption required by high-complexity ECC decoders is a significant drawback, which impacts upon the overall energy consumption of the system. However, as Integrated Circuit (IC) processing technology is scaled down, the processing energy consumed by hardware resources reduces exponentially. As a result, near-Shannon-limit ECCs have recently begun to be considered for use in WSNs to reduce the transmission energy consumption [1,2]. However, to ensure that the transmission energy reduction granted by the employed ECC makes a positive improvement to the overall energy efficiency of the system, the processing energy consumption must still be carefully considered. The main subject of this thesis is the optimisation of turbo code design at both an algorithmic and a hardware implementation level for WSN scenarios. The communication requirements of the target WSN applications, such as communication distance, channel throughput, network scale, transmission frequency and network topology, are investigated. These requirements are important factors in designing a channel coding system. Especially when energy resources are limited, the trade-offs between the requirements placed on different parameters must be carefully considered in order to minimise the overall energy consumption.
Moreover, based on this investigation, the advantages of employing near Shannon limit ECCs in WSNs are discussed. Low complexity and energy-efficient hardware implementations of the ECC decoders are essential for the target applications.
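The transmission-versus-decoding energy trade-off discussed above can be sketched with a first-order radio energy model (a Heinzelman-style model with illustrative constants, not the energy figures derived in the thesis):

```python
def total_tx_energy(bits, dist_m, coding_gain_db=0.0, e_dec_per_bit=0.0,
                    e_elec=50e-9, eps_amp=1e-10, alpha=2):
    """Per-packet energy: per-bit electronics energy plus distance-dependent
    amplifier energy. An ECC is modelled as a coding gain that lowers the
    required amplifier energy, paid for by a per-bit decoding energy.
    Every constant here is illustrative, not measured."""
    gain = 10.0 ** (-coding_gain_db / 10.0)  # reduced required TX power
    return bits * (e_elec + eps_amp * gain * dist_m ** alpha + e_dec_per_bit)

# Over a long hop the coding gain wins; over a short hop the decoder's own
# consumption can make the coded system less efficient overall:
coded_far = total_tx_energy(1000, 100, coding_gain_db=3.0, e_dec_per_bit=1e-8)
uncoded_far = total_tx_energy(1000, 100)
coded_near = total_tx_energy(1000, 10, coding_gain_db=3.0, e_dec_per_bit=1e-8)
uncoded_near = total_tx_energy(1000, 10)
```

This crossover behaviour is why the thesis weighs communication distance and decoder implementation energy together rather than treating coding gain as a free benefit.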
640

A design framework for identifying optimum services using choreography and model transformation

Alahmari, Saad January 2012 (has links)
Service Oriented Architecture (SOA) has become an effective approach for implementing loosely-coupled and flexible systems based on a set of services. However, despite the increasing popularity of the SOA approach, no comprehensive methodology is currently available to identify “optimum” services. Difficulties include the abstraction gap between the business process model and the service interface design, as well as the service quality trade-offs that affect the identification of “optimum” services. The selection of these “optimum” services implies that SOA implementation should be driven by the business model and should also consider the appropriate level of granularity. The objective of this thesis is to identify optimum service interface designs by bridging the abstraction gap and balancing the trade-offs between service quality attributes. This thesis proposes a framework using the choreography concept to bridge the abstraction gap between the business process model and the service interface design, together with service quality metrics to evaluate service quality attributes. The framework generates the service interface design automatically through a chain of model transformations from a business process model, using the choreography concept (service choreography model). The framework also develops a service quality model to measure service granularity and the service quality attributes of complexity, cohesion and coupling. These measurements are used to evaluate service interface designs and then select the optimum service interface design. Throughout this thesis, a pragmatic approach is used to validate the transformation models, applying three application scenarios and evaluating consistency. The service quality model is evaluated empirically using the generated service interface designs.
Despite several remaining challenges for service-oriented systems to identify “optimum” services, this thesis demonstrates that optimum services can be effectively identified using the new framework, as explained herein.
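The kind of interface-level measurement the service quality model performs can be illustrated with simplified stand-ins for the granularity and cohesion metrics (the operation names and the Jaccard-based cohesion below are hypothetical, not the thesis's definitions):

```python
from itertools import combinations

def interface_metrics(operations):
    """operations maps operation name -> set of data entities it touches.
    Returns (granularity, cohesion): granularity is the operation count,
    cohesion the mean pairwise Jaccard overlap of the entity sets. Both
    are simplified stand-ins for the thesis's service quality metrics."""
    ops = list(operations.values())
    granularity = len(ops)
    if granularity < 2:
        return granularity, 1.0
    sims = [len(a & b) / len(a | b) for a, b in combinations(ops, 2)]
    return granularity, sum(sims) / len(sims)

# Hypothetical candidate service interface:
g, cohesion = interface_metrics({
    "getOrder": {"order"},
    "placeOrder": {"order", "customer"},
    "getInvoice": {"invoice"},
})
```

Scoring each generated candidate this way, and trading cohesion against granularity and coupling, is the general mechanism by which an "optimum" interface can be selected from the transformation output.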
