1

Adaptive low-energy techniques in memory and digital signal processing design

He, Ku, 1982-. 12 July 2012.
As semiconductor technology continues to scale, energy efficiency and power consumption have become the dominant design limitations, especially for embedded and portable systems. Conventional worst-case design is highly inefficient from an energy perspective. In this dissertation, we propose techniques for adaptivity at the architecture and circuit levels in order to remove some of these inefficiencies. Specifically, this dissertation focuses on research contributions in two areas: 1) the development of SRAM models and circuitry to enable an intra-array voltage island approach for dealing with large random process variation; and 2) the development of low-energy digital signal processing (DSP) techniques based on controlled timing-error acceptance.

In the presence of the increased process variation that characterizes nanometer-scale CMOS technology, traditional design strategies result in designs that are overly conservative in terms of area, power consumption, and design effort. Memory arrays, such as SRAM-based caches, are especially vulnerable to process variation, where the penalty is the increase in power and bit-cell area needed to satisfy a variety of noise margins. To improve yield and reduce power consumption in large SRAM arrays, we propose an intra-array voltage island technique and develop circuits that allow for a cost-effective deployment of this technique to reduce the impact of process variation. The voltage tuning architecture makes it possible to obtain an average iso-area power reduction of 24% in the active mode, and, in the sleep mode, leakage power reductions of up to 52%, averaging 44% iso-area. Alternatively, bit-cell area can be reduced by up to 50% iso-power compared to the existing design strategy.

In many portable and embedded systems, signal processing (SP) applications are the dominant energy consumers. In this dissertation we investigate the potential of error-permissive design strategies to reduce energy consumption in such SP applications. Conventional design strategies aim to guarantee timing correctness for the input data that triggers the worst-case delay, even if such data occurs infrequently. We observe that SP applications are characterized by an intrinsic notion of a quality floor, which provides the opportunity to significantly reduce energy consumption in exchange for a limited reduction in signal quality by strategically accepting small and infrequent timing errors. We propose both design-time and run-time techniques to carefully control the quality-energy tradeoff under scaled VDD. The basic philosophy is to use data statistics to prevent, on average, severe degradation of signal quality. We introduce techniques for: 1) static and dynamic adjustment of datapath bitwidths, 2) design-time and run-time reordering of computations, 3) protection of important algorithm steps, and 4) exploitation of the specific patterns of errors for low-cost post-processing that minimizes signal quality degradation. We demonstrate the effectiveness of the proposed techniques on a 2D-IDCT/DCT design, as well as on several digital filters for audio and image processing applications. The designs were synthesized using a 45nm standard cell library, with energy and delay evaluated using NanoSim and VCS. Experiments show that the introduced techniques enable 40-70% energy savings while adding less than 6% area overhead when applied to image processing and filtering applications.
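The listing gives only the abstract; as a rough, hypothetical sketch of the bitwidth-adjustment idea (technique 1 above), the following Python snippet drops LSBs from a 16-bit datapath and measures the quality loss as SNR. All names and parameters are invented for illustration, not taken from the dissertation.

```python
import numpy as np

def truncate_lsbs(x, bits_dropped):
    # Zero out the lowest bits, emulating a datapath whose LSB slices are
    # disabled to save energy (hypothetical model, not the thesis circuit).
    mask = ~((1 << bits_dropped) - 1)
    return x & mask

def snr_db(reference, approximate):
    ref = reference.astype(np.float64)
    noise = ref - approximate.astype(np.float64)
    return 10 * np.log10(np.sum(ref ** 2) / np.sum(noise ** 2))

rng = np.random.default_rng(0)
signal = rng.integers(-2**15, 2**15, size=4096)  # 16-bit input samples

for dropped in (2, 4, 6, 8):
    approx = truncate_lsbs(signal, dropped)
    print(f"{dropped} LSBs dropped -> SNR {snr_db(signal, approx):.1f} dB")
```

Each dropped bit saves datapath switching energy at a quantifiable SNR cost, which is the kind of quality-energy knob the abstract describes.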
2

Modeling and synthesis of quality-energy optimal approximate adders

Miao, Jin. 04 March 2013.
Recent interest in approximate computation is driven by its potential to achieve large energy savings. We formally demonstrate an optimal way to reduce energy via voltage over-scaling at the cost of errors due to timing starvation in adders. A fundamental trade-off between error frequency and error magnitude in a timing-starved adder is identified. We introduce a formal model to prove that, for signal processing applications using a quadratic signal-to-noise-ratio error measure, reducing bit-wise error frequency is sub-optimal; instead, energy-optimal approximate addition requires limiting the maximum error magnitude. Intriguingly, due to the possible error patterns, this is achieved by reducing carry chains significantly below what the timing budget allows for a large fraction of sum bits, using an aligned, fixed internal-carry structure for the higher-significance bits. We further demonstrate that the remaining approximation error is reduced by realizing conditional bounding (CB) logic for the lower-significance bits. A key contribution is the formalization of an approximate CB logic synthesis problem that produces a rich space of Pareto-optimal adders with a range of quality-energy trade-offs. We show how CB logic can be customized to yield over- and under-estimating approximate adders, and how a dithering adder that mixes them produces zero-centered error distributions and, in accumulation, reduced-variance error. This work demonstrates synthesized approximate adders with energy up to 60% smaller than that of a conventional timing-starved adder, where a 30% reduction is due to the superior synthesis of inexact CB logic. When used in a larger system implementing an image-processing algorithm, energy savings of 40% are possible.
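As a hedged illustration of the carry-chain-shortening idea (a simplified segmented adder, not the thesis's CB-logic designs), the sketch below drops carries at segment boundaries and empirically shows the resulting trade: errors become frequent but their magnitude stays small, which is the regime the abstract argues is optimal under quadratic error measures.

```python
import random

def truncated_carry_add(a, b, width=16, chain=4):
    # Split the word into independent `chain`-bit segments and drop each
    # segment's carry-out: carries never propagate across a segment
    # boundary, which bounds the worst-case error magnitude.
    result = 0
    seg_mask = (1 << chain) - 1
    for base in range(0, width, chain):
        seg = ((a >> base) & seg_mask) + ((b >> base) & seg_mask)
        result |= (seg & seg_mask) << base
    return result

random.seed(1)
errors = []
for _ in range(100_000):
    a, b = random.getrandbits(16), random.getrandbits(16)
    exact = (a + b) & 0xFFFF          # exact 16-bit (wrapping) sum
    errors.append(abs(exact - truncated_carry_add(a, b)))

nonzero = [e for e in errors if e]
print(f"error frequency : {len(nonzero) / len(errors):.3f}")
print(f"max |error|     : {max(errors)} (vs. 65535 full range)")
```

Most sums are wrong, yet the worst-case error is a few percent of the output range: frequent, bounded-magnitude errors rather than rare, large ones.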
3

Modeling and synthesis of approximate digital circuits

Miao, Jin. 16 January 2015.
Energy minimization has become an ever more important concern in the design of very large scale integrated (VLSI) circuits. In recent years, approximate computing, which is based on the idea of trading off computational accuracy for improved energy efficiency, has attracted significant attention. Applications that are both compute-intensive and error-tolerant, such as digital signal processing, data mining, machine learning, and search algorithms, are the most suitable candidates for approximation strategies. Such approximations can be introduced at several design levels, ranging from software, algorithm, and architecture down to the logic and transistor levels. This dissertation investigates two research threads for the derivation of approximate digital circuits at the logic level: 1) modeling and synthesis of fundamental arithmetic building blocks; and 2) automated techniques for synthesizing arbitrary approximate logic circuits under general error specifications.

The first thread investigates elementary arithmetic blocks, such as adders and multipliers, which are at the core of all data processing and often consume most of the energy in a circuit. An optimal strategy is developed to reduce energy consumption in timing-starved adders under voltage over-scaling. This allows a formal demonstration that, under the quadratic error measures prevalent in signal processing applications, an adder design strategy that separates the most significant bits (MSBs) from the least significant bits (LSBs) is optimal. An optimal conditional bounding (CB) logic is further proposed for the LSBs, which selectively compensates for the occurrence of errors in the MSB part. Different CB solutions define a rich design space of optimal adders.

The second thread considers the problem of approximate logic synthesis (ALS) in two-level form. ALS is concerned with formally synthesizing a minimum-cost approximate Boolean function whose behavior deviates from a specified exact Boolean function in a well-constrained manner. It is established that the ALS problem, when unconstrained by the frequency of errors, is isomorphic to a Boolean relation (BR) minimization problem, and hence can be solved efficiently by existing BR minimizers. An efficient heuristic is further developed that iteratively refines the magnitude-constrained solution to arrive at a two-level representation that also satisfies error frequency constraints. To extend the two-level solution into an approach for multi-level approximate logic synthesis (MALS), Boolean network simplifications allowed by external don't cares (EXDCs) are used. The key contribution is in finding non-trivial EXDCs that maximally approach the external BR and, when applied to the Boolean network, solve the MALS problem constrained by magnitude only. The algorithm then ensures compliance with error frequency constraints by recovering the correct outputs on the sought number of error-producing inputs while aiming to minimize the increase in network cost.

Experiments demonstrate the effectiveness of the proposed techniques in deriving approximate circuits. The approximate adders can save up to 60% energy compared to exact adders at reasonable accuracy. When used in larger systems implementing image-processing algorithms, energy savings of 40% are possible. The logic synthesis approaches generally produce approximate Boolean functions or networks with complexity reductions ranging from 30% to 50% under small error constraints.
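The error specifications above constrain error frequency and error magnitude; a minimal sketch of how such metrics can be characterized exhaustively for small functions follows. The example functions are invented for illustration and are not from the dissertation.

```python
def characterize(exact_fn, approx_fn, n_inputs):
    # Exhaustively compare the two functions over all input minterms and
    # report the two metrics ALS formulations constrain:
    # error frequency (rate) and worst-case error magnitude.
    wrong, worst = 0, 0
    total = 1 << n_inputs
    for x in range(total):
        e, a = exact_fn(x), approx_fn(x)
        if e != a:
            wrong += 1
            worst = max(worst, abs(e - a))
    return wrong / total, worst

def exact(x):                 # x packs two 3-bit operands
    a, b = x >> 3, x & 0b111
    return a + b

def approx(x):
    # Hypothetical LSB approximation: bit 0 becomes OR instead of XOR and
    # its carry into bit 1 is dropped -- error magnitude is bounded by 1.
    a, b = x >> 3, x & 0b111
    upper = (a & 0b110) + (b & 0b110)
    return upper | ((a | b) & 1)

freq, mag = characterize(exact, approx, 6)
print(f"error frequency {freq:.3f}, max error magnitude {mag}")
```

Here the approximation errs on a quarter of the inputs but never by more than 1, illustrating a magnitude-constrained, frequency-unconstrained solution.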
4

Context-aware access control and presentation of Linked Data

Costabello, Luca. 29 November 2013.
This thesis discusses the influence of mobile context awareness in accessing the Web of Data from handheld devices. The work dissects this issue into two research questions: how to enable context-aware adaptation for Linked Data consumption, and how to protect access to RDF stores from context-aware devices.

The thesis contribution to the first research question is PRISSMA, an RDF rendering engine that extends Fresnel with context-aware selection of the best presentation according to the mobile context. This operation is performed by an error-tolerant subgraph matching algorithm based on the notion of graph edit distance. The algorithm takes into account the discrepancies between context descriptions and the sensed context, supports heterogeneous context dimensions, and runs on the client side to avoid disclosing sensitive context information.

The second research activity presented in the thesis is the Shi3ld access control framework for Linked Data servers. Shi3ld has the advantage of being a pluggable filter for generic triple stores, with no need to modify the endpoint itself. It adopts exclusively Semantic Web languages and adds no new policy definition languages, parsers, or validation procedures. Shi3ld provides protection down to the triple level.

The thesis describes both the PRISSMA and Shi3ld prototypes. Test campaigns show the validity of PRISSMA's results, along with its memory and response-time performance. The Shi3ld access control module has been tested on different triple stores, with and without SPARQL engines. Results show the impact on response time and demonstrate the feasibility of the approach.
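PRISSMA's matcher operates on RDF subgraphs; purely as a hypothetical, flattened illustration of error-tolerant context matching with per-dimension costs, consider the sketch below. Dimension names, costs, and the acceptance threshold are all invented.

```python
from math import hypot

def context_distance(declared, sensed, costs):
    # Toy error-tolerant matcher in the spirit of PRISSMA's graph-edit-
    # distance selection. Real PRISSMA matches RDF subgraphs; here each
    # context dimension is a flat key (hypothetical simplification).
    total = 0.0
    for dim, expected in declared.items():
        if dim not in sensed:
            total += costs.get(dim, 1.0)              # deletion cost
        elif dim == "location":                       # numeric dimension
            total += hypot(expected[0] - sensed[dim][0],
                           expected[1] - sensed[dim][1]) / 1000.0
        elif sensed[dim] != expected:
            total += costs.get(dim, 1.0)              # substitution cost
    return total

declared = {"device": "smartphone", "activity": "walking",
            "location": (4375.0, 712.0)}              # invented coordinates
sensed = {"device": "smartphone", "activity": "driving",
          "location": (4380.0, 700.0)}

d = context_distance(declared, sensed, costs={"activity": 0.5})
print(f"distance {d:.3f} -> {'match' if d < 1.0 else 'no match'}")
```

A presentation whose declared context is closest to the sensed context (below the threshold) would be selected, tolerating small discrepancies rather than requiring exact matches.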
5

Low Overhead Soft Error Mitigation Methodologies

Prasanth, V. January 2012.
CMOS technology scaling is bringing new challenges to designers in the form of new failure modes. The challenges include long-term reliability failures and particle-strike-induced random failures. Studies have shown that, increasingly, the largest contributor to device reliability failures will be soft errors. Due to reliability concerns, the adoption of soft error mitigation techniques is on the increase, and as these techniques are increasingly adopted, the area and performance overhead incurred in their implementation also becomes pertinent. This thesis addresses the problem of providing low-cost soft error mitigation. Its main contributions are: (i) a new delayed capture methodology for low-overhead soft error detection, (ii) the adoption of Error Control Coding (ECC) in the delayed capture methodology for correction of single event upsets, (iii) an analysis of the impact of different derating factors in reducing the hardware overhead incurred by the above implementations, and (iv) a proposal for hardware-software co-design for reliability, based on identifying critical components from the application executing on the hardware (as opposed to standalone hardware analysis).

The thesis first surveys existing soft error mitigation techniques and their associated limitations. It then proposes the delayed capture methodology, an enhancement of the Razor flip-flop methodology, as a low-overhead soft error detection technique. In the delayed capture methodology, the parity for a set of flip-flops is calculated at their inputs and at their outputs. The input parity is latched on a second clock, which is delayed with respect to the functional clock by more than the soft error pulse width. This requires one extra flip-flop for each set of flip-flops, whereas the Razor flip-flop methodology requires an additional flip-flop for every functional flip-flop. Due to the skew between the clocks, either the parity flip-flop or the functional flip-flops will capture the effect of a transient, and hence an error can be detected by comparing the output parity with the latched input parity. Fault injection experiments are performed to evaluate the benefits and limitations of the proposed approach. The limitations include soft error detection escapes and the lack of error correction capability. Different cases of soft error detection escapes are analyzed; they are attributed mainly to a Single Event Upset (SEU) causing multiple flip-flops within a group to be in error. The error space due to SEUs is analyzed, and an intelligent flip-flop grouping method using graph-theoretic formulations is proposed such that no SEU can cause multiple flip-flops within a group to be in error. Since leaving correction to the application may not be desirable once an error occurs, the delayed capture methodology is extended by replacing parity codes with codes of higher redundancy to enable correction. The hardware overhead of the proposed methodology is analyzed, and area savings of about 15% are obtained compared to an existing soft error mitigation methodology with equivalent coverage.

The impact of different derating factors in determining the hardware overhead of the soft error mitigation methodology is then analyzed, considering electrical derating and timing derating information. The area overhead of a circuit implementing the delayed capture methodology is analyzed with these derating factors applied standalone and in combination. Results indicate that, depending on the circuit, optimal results come either from a combination of the derating factors or from one factor considered standalone; this is a consequence of the heuristic nature of the algorithms used. About 23% area savings are obtained by employing these derating factors for a more optimal grouping of flip-flops.

Finally, a new paradigm of hardware-software co-design for reliability is proposed. It is based on application derating, in which the application or firmware code is profiled to identify the critical components that must be guarded from soft errors. This identification is based on the ability of the application software to tolerate certain errors in hardware. An algorithm is developed to identify critical components in the control logic based on fault injection. Experimental results indicate that, for a safety-critical automotive application, only 12% of the sequential logic elements are critical. This approach provides a framework for investigating how software methods can complement hardware methods to provide a reduced-hardware solution for soft error mitigation.
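As a behavioral sketch of the delayed capture idea, assuming an idealized clock skew larger than the transient pulse width, the snippet below models one group of flip-flops protected by a single parity bit. It also reproduces the detection-escape case that motivates the graph-theoretic grouping: two flips inside one group cancel in the parity.

```python
from functools import reduce
from operator import xor
import random

def parity(bits):
    return reduce(xor, bits, 0)

def delayed_capture_cycle(inputs, flipped=()):
    # One cycle for a group of flip-flops: input parity is latched on a
    # clock delayed beyond the soft-error pulse width, so a transient
    # that flips a captured output disagrees with the latched input
    # parity (idealized behavioral model, not the thesis circuit).
    captured = list(inputs)
    for i in flipped:
        captured[i] ^= 1              # particle strike flips this flip-flop
    return parity(captured) != parity(inputs)   # True -> error detected

random.seed(7)
group = [random.randint(0, 1) for _ in range(8)]
print("no upset          :", delayed_capture_cycle(group))          # False
print("single SEU        :", delayed_capture_cycle(group, (3,)))    # True
print("double flip escape:", delayed_capture_cycle(group, (3, 5)))  # False
```

The escape in the last line is exactly why flip-flops that a single SEU can upset together must be placed in different parity groups.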
6

Design of a search engine for modern needs

Maršálek, Tomáš. January 2016.
In this work I argue that the field of text search has focused mostly on long text documents, while there is a growing need for efficient short-text search, which comes with different user expectations. Because the data set is smaller, different algorithmic techniques become computationally affordable. The focus of this work is on approximate and prefix search and on purely text-based ranking methods, which are needed because text statistics are less reliable on short texts. A basic prototype search engine has been created using the researched techniques. Its capabilities were demonstrated on example search scenarios, and the implementation was compared to two other open-source systems representing currently recommended approaches to the short-text search problem. The results show the feasibility of the implemented prototype with respect to both user expectations and performance. Several options for the future direction of the system are proposed.
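As a toy illustration of the prefix-plus-approximate search and purely text-based ranking described above (not the thesis prototype), the following sketch ranks prefix matches above bounded-edit-distance matches.

```python
def edit_distance(a, b, cap=2):
    # Standard dynamic-programming Levenshtein distance with an early
    # exit once the distance provably exceeds `cap`.
    if abs(len(a) - len(b)) > cap:
        return cap + 1
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        if min(cur) > cap:
            return cap + 1
        prev = cur
    return prev[-1]

def search(query, titles, max_dist=1):
    # Rank prefix hits above approximate hits -- a purely text-based
    # ranking, since corpus statistics are unreliable on short texts.
    scored = []
    for title in titles:
        terms = title.lower().split()
        if any(t.startswith(query) for t in terms):
            scored.append((0, title))
        elif any(edit_distance(query, t, max_dist) <= max_dist for t in terms):
            scored.append((1, title))
    return [t for _, t in sorted(scored)]

titles = ["approximate adders", "adaptive SRAM design", "linked data access"]
print(search("ada", titles))         # prefix hit: ['adaptive SRAM design']
print(search("aproximate", titles))  # fuzzy hit:  ['approximate adders']
```

The edit-distance cap keeps approximate matching cheap, which is what makes such techniques affordable on short-text collections.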
