561

Investigation into yield and reliability enhancement of TSV-based three-dimensional integration circuits

Zhao, Yi January 2014 (has links)
Three-dimensional integrated circuits (3D ICs) have been acknowledged as a promising technology for overcoming the interconnect delay bottleneck brought about by continuous CMOS scaling. Recent research shows that through-silicon vias (TSVs), which act as vertical links between layers, pose yield and reliability challenges for 3D design. This thesis presents three original contributions. The first contribution presents a grouping-based technique to improve the yield of 3D ICs under manufacturing TSV defects, in which regular and redundant TSVs are partitioned into groups. Within each group, signals can be routed through good TSVs using rerouting multiplexers, avoiding defective TSVs. The grouping ratio (regular to redundant TSVs in one group) has an impact on yield and hardware overhead. Mathematical probabilistic models are presented for yield analysis under both independent and clustering defect distributions. Simulation results using MATLAB show that, for a given number of TSVs and a given TSV failure rate, careful selection of the grouping ratio achieves 100% yield at minimal hardware cost (number of multiplexers and redundant TSVs) in comparison with a design that does not exploit TSV grouping ratios. The second contribution presents an efficient online fault tolerance technique based on redundant TSVs, which detects TSV manufacturing defects and addresses thermal-induced reliability issues. The proposed technique covers both fault detection and recovery in the presence of three TSV defect types: voids, delamination between TSV and landing pad, and TSV short-to-substrate. Simulations using HSPICE and ModelSim are carried out to validate fault detection and recovery. Results show that regular and redundant TSVs can be divided into groups to minimise area overhead without affecting the fault tolerance capability of the technique. Synthesis results using a 130-nm design library show that 100% repair capability can be achieved with low area overhead (4% in the best case).
The last contribution proposes a technique that jointly considers temperature mitigation and fault tolerance without introducing additional redundant TSVs. This is achieved by reusing the spare TSVs that are routinely deployed to improve yield and reliability in 3D ICs. The proposed technique consists of two steps: a TSV determination step, which achieves an optimal partition of regular and spare TSVs into groups, and a TSV placement step, which targets temperature mitigation while optimising total wirelength and routing difference. Simulation results show that, using the proposed technique, 100% repair capability is achieved across all five benchmarks with an average temperature reduction of 75.2℃ (34.1%) (best case 99.8℃ (58.5%)), while increasing wirelength by only a small amount.
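The independent-defect yield analysis described above can be made concrete with a small sketch. The binomial grouping model below is an illustrative reconstruction, not the thesis's actual model: the group sizes, group count and failure rate are assumed for the example.

```python
from math import comb

def group_yield(m, r, p):
    """Probability that a group of m regular + r redundant TSVs can still
    route all m signals: at most r of the m + r TSVs may be defective,
    assuming independent failures with probability p."""
    n = m + r
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(r + 1))

def chip_yield(num_groups, m, r, p):
    """Chip-level yield: every group must be repairable."""
    return group_yield(m, r, p) ** num_groups

# Example: 64 groups with a 4:1 grouping ratio, 1% TSV failure rate.
print(chip_yield(64, 4, 1, 0.01))
```

Varying `m` and `r` in this sketch shows the trade-off the thesis studies: a smaller grouping ratio raises yield but costs more redundant TSVs and multiplexers.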
562

Coherent versus differential multiple-input multiple-output systems

Zhang, Peichang January 2015 (has links)
In recent years, Multiple-Input Multiple-Output (MIMO) techniques have attracted substantial attention due to their capability of providing spatial diversity and/or multiplexing gains. Inspired by the concept of Spatial Modulation (SM), the novel concept of Space-Time Shift Keying (STSK) was recently proposed, which is considered to have the following advantages: 1) STSK constitutes a generalised shift keying architecture, which is capable of striking the required trade-off between spatial and time diversity as well as multiplexing gain, and which includes SM and Space Shift Keying (SSK) as special cases. 2) Owing to its high degree of design freedom, the above-mentioned flexible diversity versus multiplexing gain trade-off can be achieved by optimising both the number and size of the dispersion matrices, as well as the number of transmit and receive antennas. 3) As in the SM/SSK schemes, Inter-Antenna Interference (IAI) may be eliminated and, consequently, the adoption of single-antenna-based Maximum Likelihood (ML) detection becomes realistic in STSK schemes. In this report, our investigation falls into two major categories: Coherent STSK (CSTSK) and Differential STSK (DSTSK) schemes. For CSTSK, since Channel State Information (CSI) is required for data detection, Channel Estimation (CE) techniques become necessary. To be more explicit, we first briefly review the conventional Training-Based CE (TBCE) and Semi-Blind CE (SBCE) schemes for CSTSK MIMO systems. In addition, we develop a Block-of-Bits Selection Based CE (BBSBCE) algorithm for CSTSK schemes that increases the overall system throughput while improving the accuracy of the CE. Additionally, it has been widely recognised that MIMO schemes are capable of achieving a diversity and/or multiplexing gain by employing multiple Antenna Elements (AEs) at the transmitter and/or the receiver.
However, it should also be noted that since MIMO systems utilise multiple RF chains, their power consumption and hardware costs become substantial. Against this background, we introduce the concept of Antenna Selection (AS) and propose a simple yet efficient AS algorithm, namely the Norm-Based Joint Transmit and Receive AS (NBJTRAS), for assisting MIMO systems. DSTSK also draws our attention, since no CSI is required for differential detection schemes. However, in the absence of CE, Conventional Differential Detection (CDD) schemes usually suffer from a 3 dB performance degradation and may exhibit an error floor when the Doppler frequency is excessive. In order to mitigate this problem, we investigate the Multiple-Symbol Differential Sphere Detection (MSDSD) scheme and adopt it in our DSTSK scheme to improve the system performance while reducing the detection complexity. Furthermore, based on our MSDSD-detected DSTSK scheme, we propose a DSTSK-aided Multi-User Successive Relaying aided Cooperative System (MUSRC), which is capable of flexibly supporting a varying number of users while recovering the conventional 50% throughput loss caused by the half-duplex transmit and receive constraint of practical transceivers.
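A minimal sketch of the norm-based selection idea behind NBJTRAS, restricted for illustration to the receive side: keep the receive antennas whose channel rows have the largest Euclidean norms. The channel values are invented, and the thesis's joint transmit-and-receive algorithm is more elaborate than this.

```python
def select_receive_antennas(H, k):
    """Norm-based receive antenna selection (simplified sketch): keep the
    k receive antennas (rows of the channel matrix H) with the largest
    Euclidean channel norms."""
    norms = [(sum(abs(h) ** 2 for h in row) ** 0.5, i)
             for i, row in enumerate(H)]
    chosen = sorted(i for _, i in sorted(norms, reverse=True)[:k])
    return chosen, [H[i] for i in chosen]

# Toy 4x2 channel (complex gains); select the 2 strongest receive antennas.
H = [[0.10 + 0.20j, 0.30 - 0.10j],
     [1.00 + 0.50j, -0.80 + 0.20j],
     [0.05 - 0.02j, 0.10 + 0.00j],
     [0.70 - 0.60j, 0.40 + 0.90j]]
idx, H_sel = select_receive_antennas(H, 2)
print(idx)  # rows 1 and 3 have the largest norms
```

The appeal of the norm criterion is that it needs only per-antenna channel magnitudes, avoiding an exhaustive search over antenna subsets while still deactivating the RF chains that contribute least.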
563

Service-oriented grids and problem solving environments

Fairman, Matthew J. January 2004 (has links)
The Internet’s continued rapid growth is creating an untapped environment containing a large quantity of highly capable computing resources suitable for exploitation in existing capacity-constrained and new innovative capability-driven distributed applications. The Grid is a new computing model that has emerged to harness these resources in a manner that fits the problem solving process needs of the computational engineering design community. Their unique requirements have created specific challenges for Grid technologies: to bring interoperability, stability, scalability and flexibility, in addition to transparent integration and generic access to disparate computing resources within and across institutional boundaries. The emergence of maturing, open-standards-based service-oriented (SO) technologies fulfils the fundamental requirement of interoperability, offers a flexible framework onto which sophisticated system architectures may be built, and provides a suitable base for the development of future Grid technologies. The work presented in this thesis is motivated by the desire to identify, understand, and resolve important challenges involved in the construction of Grid-enabled Problem Solving Environments (PSEs) using SO technologies. The work explains why these technologies are appropriate for Grid computing and successfully demonstrates the application and benefits of SO technologies in the scenarios of the Computational Micromagnetics and Grid-enabled Engineering Optimisation and Design Search (Geodise) systems. The experience gained through this work can also serve as a reference for future applications of Grid computing in other areas.
564

Preprocessing for content-based image retrieval

Rodhetbhai, Wasara January 2009 (has links)
The research focuses on image retrieval problems where the query is formed as an image of a specific object of interest. The broad aim is to investigate pre-processing for the retrieval of images of objects when an example image containing the object is given; the object may appear against a variety of backgrounds. Under the assumption that the object of interest is fairly centrally located in the image, normalized cut segmentation and region growing segmentation are investigated to separate the object from the background, but with limited success. An alternative approach comes from identifying salient regions in the image and extracting local features as a representation of those regions. Experiments show an improvement in retrieval using local features when compared with retrieval using global features from the whole image. For situations where object retrieval is required and where the foreground and background can be assumed to have different characteristics, it is useful to exclude salient regions characteristic of the background, if they can be identified, before matching is undertaken. This thesis proposes techniques to filter out salient regions believed to be associated with the background area. The first proposed technique, background filtering using background clusters, addresses the situation where only background information is available for training. The second technique is K-NN classification based on foreground and background probabilities. In the last chapter, the support vector machine (SVM) method with PCA-SIFT descriptors is applied in an attempt to improve classification into foreground and background salient region classes. Retrieval comparisons show that salient region background filtering gives an improvement in performance when compared with the unfiltered method.
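The K-NN filtering step can be sketched roughly as follows. This is a plain majority-vote K-NN over toy 2-D descriptors, with invented data; the thesis's actual formulation in terms of foreground and background probabilities is more refined.

```python
def knn_label(x, train, k=3):
    """Classify descriptor x by majority vote among its k nearest
    labelled training descriptors (squared Euclidean distance)."""
    dists = sorted((sum((a - b) ** 2 for a, b in zip(x, feat)), label)
                   for feat, label in train)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

def filter_background(regions, train, k=3):
    """Keep only salient regions whose descriptors classify as foreground;
    background-like regions are filtered out before matching."""
    return [r for r in regions if knn_label(r, train, k) == "fg"]

# Toy 2-D descriptors: background descriptors cluster near the origin.
train = [((0.1, 0.2), "bg"), ((0.2, 0.1), "bg"), ((0.0, 0.3), "bg"),
         ((2.0, 2.1), "fg"), ((2.2, 1.9), "fg"), ((1.8, 2.0), "fg")]
regions = [(0.15, 0.25), (2.1, 2.0), (1.9, 2.2)]
print(filter_background(regions, train))  # background-like region removed
```

Filtering before matching shrinks the set of regions that must be compared against the query, which is where the retrieval improvement reported above comes from.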
565

Clustering solutions : a novel approach to solving NP-complete problems

Qasem, Mohamed January 2010 (has links)
In this thesis, we introduce a novel approach to solving MAX-SAT problems. The algorithm clusters good solutions and restarts the search from the closest feasible configuration to the centroid of each cluster. We call this method Clustered-Landscape Guided Hopping (CLGH). In addition, where clustering does not provide an advantage because the landscape is not clustered, we use Averaged-Landscape Guided Hopping (ALGH). CLGH is shown to be highly efficient for finding good solutions to large MAX-SAT problems. Systematic studies of the landscape are presented to show that the success of clustering is due to learning the large-scale structure of the fitness landscape. Previous studies by other researchers analysed the relationship between local and global minima and provided an insight into the configuration of the landscape; it was found that local minima form clusters around global ones. We expand these analyses to cover the relationship between clusters, and find that local minima form many correlated yet distant clusters. In addition, we show the existence of a relationship between the size of the problem and the distance between local minima. To rule out other explanations for this success, we test several other population-based algorithms and compare their performance with clustering; we also compare with solo-search algorithms. We show that this method is superior to all algorithms tested: CLGH produces results comparable to those of a solo local-search algorithm in 95% less time. Moreover, this is not a standalone technique; it can be incorporated within other algorithms to further enhance their performance. A further application of clustering is carried out on the Traveling Salesman Problem (TSP) in the discrete domain, and on Artificial Neural Networks (ANNs) trained with backpropagation for data classification in the continuous domain.
Since the TSP does not exhibit a clustered landscape configuration, we find that ALGH is an effective method for improving search results. Preliminary results indicate that extensions of the proposed algorithm can give similar improvements on these hard optimisation problems.
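The centroid-hopping step of CLGH can be sketched for Boolean assignments: averaging a cluster of good solutions bitwise and rounding gives the Boolean configuration closest to the centroid, which becomes the restart point. The cluster below is a toy example, and feasibility checking against the MAX-SAT instance is omitted.

```python
def centroid_restart(cluster):
    """Centroid-hopping step (simplified sketch of the CLGH idea):
    average a cluster of good Boolean assignments bitwise, then round
    each bit to the nearest value to obtain a new restart point."""
    n = len(cluster[0])
    return [1 if sum(sol[i] for sol in cluster) * 2 >= len(cluster) else 0
            for i in range(n)]

# Three good solutions that agree on most variables.
cluster = [[1, 0, 1, 1, 0],
           [1, 0, 1, 0, 0],
           [1, 1, 1, 1, 0]]
print(centroid_restart(cluster))  # majority vote per bit: [1, 0, 1, 1, 0]
```

Bitwise rounding is equivalent to a per-variable majority vote, so the restart point preserves the large-scale structure the cluster agrees on while discarding the variables on which the good solutions disagree.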
566

Comparison and performance enhancement of modern pattern classifiers

Suppharangsan, Somjet January 2010 (has links)
This thesis is a critical empirical study, using a range of benchmark datasets, of the performance of some modern machine learning systems and possible enhancements to them. When new algorithms and their performance are reported in the machine learning literature, most authors pay little attention to reporting the statistical significance of performance differences. We take Gaussian process classifiers as an example, for which the literature shows a disappointingly small number of careful performance evaluations. What is particularly ignored is any use of the uncertainties in the performance measures when making comparisons. This thesis makes a novel contribution by developing a methodology for formal comparisons that also includes performance uncertainties. Using support vector machines (SVMs) as the classification architecture, the thesis explores two potential enhancements for complexity reduction: (a) subset selection on the training data by pre-processing approaches, and (b) organising the classes of a multi-class problem in a tree structure for fast classification. The former is crucial, as dataset sizes have increased rapidly and straightforward training using quadratic programming over all of the given data is prohibitively expensive. While some researchers focus on training algorithms that operate in a stochastic manner, we explore data reduction by cluster analysis. Multi-class problems in which the number of classes is very large are of increasing interest. Our contribution is to speed up training by removing as much irrelevant data as possible while preserving the data points believed to be support vectors. The results show that too high a data reduction rate can degrade performance. However, on a subset of problems, the proposed methods produce results comparable to the full SVM despite the high reduction rate. The new learning tree structure can then be combined with the data selection methods to obtain a further increase in speed.
Finally, we also critically review SVM classification problems in which the input data is binary. In the chemoinformatics and bioinformatics literature, the Tanimoto kernel has been empirically shown to have good performance. The work we present, using carefully set up synthetic data of varying dimensions and dataset sizes, casts doubt on such claims. Improvements are noticeable, but not to the extent claimed in previous studies.
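The Tanimoto kernel referred to above has a standard closed form on binary vectors, sketched here with illustrative example vectors: the inner product of the two fingerprints divided by the count of bits set in either.

```python
def tanimoto_kernel(x, y):
    """Tanimoto (Jaccard) kernel on binary vectors, as used for chemical
    fingerprints: <x,y> / (<x,x> + <y,y> - <x,y>)."""
    xy = sum(a * b for a, b in zip(x, y))
    xx = sum(a * a for a in x)
    yy = sum(b * b for b in y)
    return xy / (xx + yy - xy)

a = [1, 0, 1, 1, 0, 1]
b = [1, 1, 1, 0, 0, 1]
print(tanimoto_kernel(a, b))  # 3 shared bits / (4 + 4 - 3) = 0.6
```

Because the kernel depends only on shared and total set bits, it is insensitive to the many zero positions typical of sparse fingerprints, which is the usual argument for its good empirical performance in chemoinformatics.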
567

Decentralised coordination of information gathering agents

Stranders, Ruben January 2010 (has links)
Unmanned sensors are rapidly becoming the de facto means of achieving situational awareness---the ability to make sense of, and predict what is happening in an environment---in disaster management, military reconnaissance, space exploration, and climate research. In these domains, and many others besides, their use reduces the need for exposing humans to hostile, impassable or polluted environments. Whilst these sensors are currently often pre-programmed or remotely controlled by human operators, there is a clear trend toward making these sensors fully autonomous, thus enabling them to make decisions without human intervention. Full autonomy has two clear benefits over pre-programming and human remote control. First, in contrast to sensors with pre-programmed motion paths, autonomous sensors are better able to adapt to their environment, and react to a priori unknown external events or hardware failure. Second, autonomous sensors can operate in large teams that would otherwise be too complex to control by human operators. The key benefit of this is that a team of cheap, small sensors can achieve through cooperation the same results as individual large, expensive sensors---with more flexibility and robustness. In light of the importance of autonomy and cooperation, we adopt an agent-based perspective on the operation of the sensors. Within this view, each sensor becomes an information gathering agent. As a team, these agents can then direct their collective activity towards collecting information from their environment with the aim of providing accurate and up-to-date situational awareness. Against this background, the central problem we address in this thesis is that of achieving accurate situational awareness through the coordination of multiple information gathering agents. To achieve general and principled solutions to this problem, we formulate a generic problem definition, which captures the essential properties of dynamic environments. 
Specific instantiations of this generic problem span a broad spectrum of concrete application domains, of which we study three canonical examples: monitoring environmental phenomena, wide area surveillance, and search and patrol. The main contributions of this thesis are decentralised coordination algorithms that solve this general problem with additional constraints and requirements, and can be grouped into two categories. The first category pertains to decentralised coordination of fixed information gathering agents. For these agents, we study the application of decentralised coordination during two distinct phases of the agents' life cycle: deployment and operation. For the former, we develop an efficient algorithm for maximising the quality of situational awareness, while simultaneously constructing a reliable communication network between the agents. Specifically, we present a novel approach to the NP-hard problem of frequency allocation, which deactivates certain agents such that the problem can be provably solved in polynomial time. For the latter, we address the challenge of coordinating these agents under the additional assumption that their control parameters are continuous. In so doing, we develop two extensions to the max-sum message passing algorithm for decentralised welfare maximisation, which constitute the first two algorithms for distributed constraint optimisation problems (DCOPs) with continuous variables---CPLF-MS (for linear utility functions) and HCMS (for non-linear utility functions). The second category relates to decentralised coordination of mobile information gathering agents whose motion is constrained by their environment. For these agents, we develop algorithms with a receding planning horizon, and a non-myopic planning horizon. 
The former is based on the max-sum algorithm, thus ensuring an efficient and scalable solution, and constitutes the first online agent-based algorithm for the domains of pursuit-evasion, patrolling and monitoring environmental phenomena. The second uses sequential decision making techniques for the offline computation of patrols---infinitely long paths designed to continuously monitor a dynamic environment---which are subsequently improved on at runtime through decentralised coordination. For both topics, the algorithms are designed to satisfy our design requirements of quality of situational awareness, adaptiveness (the ability to respond to a priori unknown events), robustness (the ability to degrade gracefully), autonomy (the ability of agents to make decisions without the intervention of a centralised controller), modularity (the ability to support heterogeneous agents) and performance guarantees (the ability to give a lower bound on the quality of the achieved situational awareness). When taken together, the contributions presented in this thesis represent an advance in the state of the art of decentralised coordination of information gathering agents, and a step towards achieving autonomous control of unmanned sensors.
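The welfare-maximisation objective that max-sum pursues in a decentralised way can be stated concretely: find the joint assignment of agent variables that maximises the sum of local utility functions. The brute-force sketch below only makes that objective explicit on a toy two-sensor problem; the utilities are invented for the example, and this is not the message-passing algorithm itself.

```python
from itertools import product

def max_welfare(domains, utilities):
    """Exhaustive welfare maximisation over a small DCOP: return the joint
    assignment maximising the sum of local utility functions, plus its
    welfare. Max-sum approximates this by message passing; brute force is
    used here only to make the objective concrete."""
    best = max(product(*domains),
               key=lambda a: sum(u(a) for u in utilities))
    return best, sum(u(best) for u in utilities)

# Two sensors each choose a sector to observe (0 or 1); toy utilities
# reward each sensor's preferred sector and penalise overlapping coverage.
domains = [[0, 1], [0, 1]]
utilities = [lambda a: 2 if a[0] == 0 else 1,      # sensor 1 prefers sector 0
             lambda a: 2 if a[1] == 1 else 1,      # sensor 2 prefers sector 1
             lambda a: -3 if a[0] == a[1] else 0]  # overlap penalty
print(max_welfare(domains, utilities))
```

Exhaustive search is exponential in the number of agents, which is precisely why decentralised approximations such as max-sum, and the continuous-variable extensions developed in this thesis, are needed.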
568

Measurement and simulation of partial discharges within a spherical cavity in a solid dielectric material

Illias, Hazlee Azil January 2011 (has links)
For high voltage components, the measurement of partial discharge (PD) is used in the performance assessment of an insulation system. Through modelling the PD process, a better understanding of the phenomenon may be attained. In this work, a model of a spherical cavity within a homogeneous dielectric material has been developed using Finite Element Analysis (FEA) software in parallel with MATLAB code. The model has been used to study the effect of various applied stresses and cavity conditions on PD activity, and also the electric field and temperature distributions within the cavity. Experimental measurement of PD activity within a spherical cavity has also been undertaken. The measurements were performed for different amplitudes and frequencies of the applied voltage, a range of spherical cavity sizes, and varying material temperature. The results obtained show that PD is strongly influenced by the conditions of the cavity and the applied stress. The cycle-to-cycle behaviour of PD events, discharge phase and magnitude distributions, number of PDs per cycle, total charge magnitude per cycle, mean charge magnitude and maximum charge magnitude have been obtained and analysed for each experiment. The simulation results from the PD model have been compared with the measurement results. It is found that certain model parameters are dependent on the applied stress and cavity conditions. Parameters that clearly affect PD activity can be readily identified. These parameters include: the effective charge decay time constant, the cavity surface conductivity, the initial electron generation rate, the inception field, the extinction field and the temperature decay time constant in the cavity. The influences on PD activity of surface charge decay through conduction along the cavity wall, and of temperature and pressure change in the cavity, have also been studied.
569

End to end solutions for a droplet microfluidic autonomous experimentation system

Jones, Gareth January 2012 (has links)
Scientific discovery is limited by finite experimental resources; therefore, careful strategic planning is required when committing resources to an experiment. Often the decision to commit resources is based upon observations made from previous experiments. However, real-world data is inherently noisy and often follows an underlying nonlinear trend, and in such circumstances the decision to commit resources is unclear. Autonomous experimentation, where machine learning algorithms control an experimentation platform, is one approach that has the potential to deal with these issues and consequently could help drive scientific discoveries. In the context of applying autonomous experimentation to identify new behaviours in chemical or biological systems, the machine learning algorithms are limited by the capability of the hardware technology to generate, on demand, complex mixtures from a wide range of chemicals. This limitation forms the basis for the work described in this thesis. Specifically, this thesis documents the development of a hardware system that is designed to support scalability, is capable of automating processes, and is built from technology readily accessible to other researchers. The hardware system is derived from droplet microfluidic technology and allows microscale biochemical samples of varying composition to be created automatically. During the development of the hardware system, technical challenges in fabrication, sensor system development, microfluidic design and mixing were encountered. Solutions to these challenges were found and are presented as: fabrication techniques that enable integrated-valve microfluidic devices to be created in a standard chemistry laboratory environment without the need for sophisticated equipment; a compact UV photometer system built from optical semiconductor components; and a novel mixing strategy that increases the mixing efficiency of large droplets.
Having addressed these technical challenges and in fulfilling the aims set out above, the work in this thesis has sufficiently improved hardware technology to free the machine learning algorithms from the constraint of working with just a few experimental variables.
570

The use of Raman microprobe spectroscopy in the analysis of electrically aged polymeric insulators

Freebody, Nicola January 2012 (has links)
Due to its applications in high voltage insulation, a thorough understanding of the chemical reactions that occur during electrical ageing in polymers is needed. A confocal Raman microscope has a potential spatial resolution of ~1 μm along both the lateral and optic axes and is able to characterise the localised chemical composition of a material; for this reason, it has been applied in the study of electrical ageing in solid dielectrics. Due to inaccurate assumptions about the optical processes involved in confocal Raman microprobe spectroscopy (CRMS), however, exact characterisation of the processes and chemicals involved has previously proven difficult. The objective of this study is to apply Raman microprobe spectroscopy to the analysis of the chemical structures of electrically aged polymers. It was found that, with the application of immersion oil and by using a refined version of a model of CRMS based on a photon scattering approach, CRMS is a valuable tool in the study of polymers. More accurate results can be obtained, however, by exposing the feature in question at the surface and applying non-confocal Raman microprobe spectroscopy (RMS). CRMS was applied to a variety of polymeric samples containing electrically aged voids and electrical trees. The results showed that within the electrically aged voids, chemical signatures can be found similar to those previously observed in electrical trees in polyethylene (PE). Finally, a variety of polymeric insulators was subjected to spark ageing and corona discharge. The by-products of these ageing mechanisms were then characterised using RMS in an attempt to reproduce in bulk the chemical compounds formed in electrical treeing. The resulting Raman spectra indicated that the same by-products as those formed in voids and trees are indeed formed.
Where possible, all results were compared with data obtained using Fourier transform infrared (FTIR) spectroscopy and scanning electron microscopy (SEM), and discussed in relation to previously published work.
