751 |
Efficient design and decoding of the rate-compatible low-density parity-check codes / Wu, Xiaoxiao. January 2009 (has links)
Includes bibliographical references (p. 64-67).
|
752 |
High code rate, low-density parity-check codes with guaranteed minimum distance and stopping weight / Miller, John. January 2003 (has links)
Thesis (Ph. D.)--University of California, San Diego, 2003. / Vita. Includes bibliographical references.
|
753 |
The efficacy of written corrective feedback and students' perceptions : A survey about the impact of written response on L2 writing / Munther, Pernilla. January 2015 (has links)
The purpose of this study was to investigate to what extent written corrective feedback (WCF) is a good way to treat the errors that L2 (second language) pupils make, and whether pupils attend to the comments in future written assignments. WCF is the most widely used response to written assignments. Some research takes the perspective that it is fruitful (Chandler 2003, Ferris 2003), while other research argues that it is inefficient and unnecessary (e.g. Truscott 1996, 1999). This study presents the findings of a survey on the topic conducted at a small school in south-east Sweden. A comparison between previous research and the findings of the present survey leads to the conclusion that the efficacy of WCF is limited, and the results suggest that the type of feedback and how it is delivered are important. It is also likely to be beneficial for pupils to revise their texts in order to improve their written English.
|
754 |
Analysis and Visualization of Validation Results / Forss, Carl-Philip. January 2015 (has links)
Usage of simulation models is an essential part of many modern engineering disciplines. Computer models of complex physical systems can be used to expedite the design of control systems and reduce the number of physical tests. Model validation tries to answer the question of whether the model is a good enough representation of the physical system. This thesis describes techniques to visualize multi-dimensional validation results and the search for an automated validation process. The work focuses on a simulation model of the Primary Environmental Control System of Gripen E, but can be applied to validation results from other simulation models. The results of the thesis can be divided into three major components: static validation, dynamic validation and model coverage. To present the results of the static validation, different multi-dimensional visualization techniques are investigated and evaluated. The visualizations are compared to each other, and a combination of visualizations is required to properly depict the static validation status of the model. Two methods for validating the dynamic performance of the model are examined. The first uses the singular values of an error model estimated from the residual. We show that the singular values of the error model convey important information about the model's quality, but interpreting the result is a considerable challenge. The second method aims to automate a visual inspection procedure in which interesting quantities are computed automatically. Coverage describes how much of the applicable operating conditions has been validated. Two coverage metrics, volumetric coverage and nearest-neighbour coverage, are examined, and the strengths and weaknesses of each are presented. The nearest-neighbour coverage metric is further developed to account for validation performance, resulting in a total static validation quantity.
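The nearest-neighbour coverage idea named in the abstract can be sketched as a small computation: an operating point counts as covered if it lies within some distance of at least one validated test point. The function name, the radius parameter and the 2-D operating points below are illustrative assumptions on my part, not the thesis's exact metric definition.

```python
import numpy as np

def nearest_neighbour_coverage(validated, candidates, radius):
    """Fraction of candidate operating points lying within `radius`
    of the nearest validated point (an illustrative sketch of a
    nearest-neighbour coverage metric, not the thesis's definition)."""
    covered = 0
    for c in candidates:
        # distance from candidate point c to its nearest validated point
        d = np.min(np.linalg.norm(validated - c, axis=1))
        if d <= radius:
            covered += 1
    return covered / len(candidates)

validated = np.array([[0.0, 0.0], [1.0, 1.0]])   # points already tested
candidates = np.array([[0.1, 0.0], [2.0, 2.0]])  # operating envelope samples
print(nearest_neighbour_coverage(validated, candidates, 0.5))  # 0.5
```

A refinement in the spirit of the thesis's "total static validation quantity" would weight each covered point by the validation performance of its nearest validated neighbour rather than counting it as simply covered or not.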
|
755 |
A Framework for Software Security Testing and Evaluation / Dutta, Rahul Kumar. January 2015 (has links)
Security is a growing concern in the automotive industry. As more smart electronic devices become connected to each other, our dependency on these devices is pushing us to connect them to moving objects such as cars, buses and trucks. As such, safety and security issues related to automotive objects are becoming more relevant in the realm of internet-connected devices. In this thesis, we emphasize certain factors that introduce security vulnerabilities in the implementation phase of the Software Development Life Cycle (SDLC). Improper input validation is one of the factors we address in our work. We implement a security evaluation framework that allows us to improve security in automotive software by identifying and removing software security vulnerabilities that arise from improper input validation during the SDLC. We propose using this framework in the implementation and testing phases so that critical deficiencies in security-by-design could be easily addressed and mitigated.
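As a loose illustration of the kind of input-validation testing the abstract describes, the sketch below feeds a corpus of malformed payloads to a command parser and flags any payload that is wrongly accepted. The parser, its message grammar and the malformed corpus are all invented for illustration; this is not the thesis's framework.

```python
def parse_speed_command(payload: str) -> int:
    """Hypothetical command parser, invented for illustration only:
    accepts 'SPEED:<n>' with 0 <= n <= 255 and rejects everything else."""
    if not payload.startswith("SPEED:"):
        raise ValueError("bad prefix")
    value = payload[len("SPEED:"):]
    if not value.isdigit():
        raise ValueError("non-numeric value")
    n = int(value)
    if n > 255:
        raise ValueError("out of range")
    return n

# A tiny negative-test corpus: every payload here is malformed and must
# be rejected; any acceptance would indicate an input-validation defect.
malformed = ["SPEED:", "SPEED:-1", "SPEED:999", "SPD:10", "SPEED:10;DROP"]
for m in malformed:
    try:
        parse_speed_command(m)
        print("input-validation defect:", m)
    except ValueError:
        pass  # correctly rejected
```

A real evaluation framework would generate such corpora systematically (boundary values, type confusion, injection strings) rather than hand-listing them.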
|
756 |
Adaptive low-energy techniques in memory and digital signal processing design / He, Ku, 1982- 12 July 2012 (has links)
As semiconductor technology continues to scale, energy efficiency and power consumption have become the dominant design limitations, especially for embedded and portable systems. Conventional worst-case design is highly inefficient from an energy perspective. In this dissertation, we propose techniques for adaptivity at the architecture and circuit levels in order to remove some of these inefficiencies. Specifically, this dissertation focuses on research contributions in two areas: 1) the development of SRAM models and circuitry to enable an intra-array voltage island approach for dealing with large random process variation; and 2) the development of low-energy digital signal processing (DSP) techniques based on controlled timing error acceptance.
In the presence of the increased process variation that characterizes nanometer-scale CMOS technology, traditional design strategies result in designs that are overly conservative in terms of area, power consumption, and design effort. Memory arrays, such as SRAM-based caches, are especially vulnerable to process variation, where the penalty is the increase in power and bit-cell area needed to satisfy a variety of noise margins. To improve yield and reduce power consumption in large SRAM arrays, we propose an intra-array voltage island technique and develop circuits that allow for a cost-effective deployment of this technique to reduce the impact of process variation. The voltage tuning architecture makes it possible to obtain an average iso-area power reduction of 24% in the active mode, and a leakage power reduction of up to 52%, averaging 44% iso-area, in the sleep mode. Alternatively, bit-cell area can be reduced by up to 50% iso-power compared to the existing design strategy.
In many portable and embedded systems, signal processing (SP) applications are the dominant energy consumers. In this dissertation we investigate the potential of error-permissive design strategies to reduce energy consumption in such SP applications. Conventional design strategies aim to guarantee timing correctness for the input data that triggers the worst-case delay, even if such data occurs infrequently. We observe that SP applications are characterized by an intrinsic quality floor. This provides the opportunity to significantly reduce energy consumption in exchange for a limited reduction in signal quality by strategically accepting small and infrequent timing errors. We propose both design-time and run-time techniques to carefully control the quality-energy tradeoff under scaled VDD. The basic philosophy is to prevent signal quality from degrading severely, on average, by using data statistics. We introduce techniques for: 1) static and dynamic adjustment of datapath bitwidths, 2) design-time and run-time reordering of computations, 3) protection of important algorithm steps, and 4) exploiting the specific patterns of errors for low-cost post-processing to minimize signal quality degradation. We demonstrate the effectiveness of the proposed techniques on a 2D-IDCT/DCT design, as well as several digital filters for audio and image processing applications. The designs were synthesized using a 45nm standard cell library, with energy and delay evaluated using NanoSim and VCS. Experiments show that the introduced techniques enable 40-70% energy savings while adding less than 6% area overhead when applied to image processing and filtering applications. / text
|
757 |
Linear estimation for data with error ellipses / Amen, Sally Kathleen. 21 August 2012 (has links)
When scientists collect data to be analyzed, regardless of what quantities are being measured, there are inevitably errors in the measurements. In cases where two independent variables are measured with errors, many existing techniques can produce an estimated least-squares linear fit to the data, taking into consideration the size of the errors in both variables. Yet some experiments yield data that not only contain errors in both variables, but also a non-zero covariance between the errors. In such situations, the experiment results in measurements with error ellipses whose tilts are specified by the covariance terms.
Following an approach suggested by Dr. Edward Robinson, Professor of Astronomy at the University of Texas at Austin, this report describes a methodology that finds the estimates of linear regression parameters, as well as an estimated covariance matrix, for a dataset with tilted error ellipses. Contained in an appendix is the R code for a program that produces these estimates according to the methodology. This report describes the results of the program run on a dataset of measurements of the surface brightness and Sérsic index of galaxies in the Virgo cluster. / text
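A generic version of such a fit can be sketched as a chi-square minimization in which each point is weighted by the variance of its vertical residual, which for correlated errors is Var(y - b*x) = sy^2 + b^2*sx^2 - 2*b*cov. The "effective variance" iteration below is a standard textbook approach and an assumption on my part; it is not Robinson's exact methodology or the R program contained in the report's appendix.

```python
import numpy as np

def fit_line_with_error_ellipses(x, y, sx, sy, cxy, iters=50):
    """Straight-line fit y = a + b*x for data whose x- and y-errors are
    correlated (tilted error ellipses).  Each point is weighted by the
    inverse variance of its vertical residual,
        Var(y - b*x) = sy**2 + b**2*sx**2 - 2*b*cxy,
    which depends on the slope b and is therefore re-evaluated on every
    iteration.  Illustrative sketch only, not the report's method."""
    b = 0.0
    for _ in range(iters):
        w = 1.0 / (sy**2 + b**2 * sx**2 - 2.0 * b * cxy)  # effective weights
        W = w.sum()
        xbar = (w * x).sum() / W
        ybar = (w * y).sum() / W
        # weighted least-squares slope with the current effective weights
        b = (w * (x - xbar) * (y - ybar)).sum() / (w * (x - xbar)**2).sum()
    a = ybar - b * xbar
    return a, b

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 1.0 + 2.0 * x                      # noiseless line for demonstration
sx = np.full(4, 0.1)
sy = np.full(4, 0.1)
cxy = np.zeros(4)                      # zero tilt in this toy example
a, b = fit_line_with_error_ellipses(x, y, sx, sy, cxy)
print(round(a, 3), round(b, 3))        # 1.0 2.0
```

For strongly tilted ellipses the weights must stay positive, which holds whenever |cxy| < sx*sy per point; a production implementation would also propagate a covariance matrix for (a, b), as the report's program does.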
|
758 |
Modeling and synthesis of quality-energy optimal approximate adders / Miao, Jin. 04 March 2013 (has links)
Recent interest in approximate computation is driven by its potential to achieve large energy savings. We formally demonstrate an optimal way to reduce energy via voltage over-scaling at the cost of errors due to timing starvation in addition. A fundamental trade-off between error frequency and error magnitude in a timing-starved adder has been identified. We introduce a formal model to prove that for signal processing applications using a quadratic signal-to-noise ratio error measure, reducing bit-wise error frequency is sub-optimal. Instead, energy-optimal approximate addition requires limiting maximum error magnitude. Intriguingly, due to possible error patterns, this is achieved by reducing carry chains significantly below what is allowed by the timing budget for a large fraction of sum bits, using an aligned, fixed internal-carry structure for higher significance bits. We further demonstrate that the remaining approximation error is reduced by realization of conditional bounding (CB) logic for lower significance bits. A key contribution is the formalization of an approximate CB logic synthesis problem that produces a rich space of Pareto-optimal adders with a range of quality-energy trade-offs. We show how CB logic can be customized to result in over- and under-estimating approximate adders, and how a dithering adder that mixes them produces zero-centered error distributions and, in accumulation, a reduced-variance error. This work demonstrates synthesized approximate adders with energy up to 60% smaller than that of a conventional timing-starved adder, where a 30% reduction is due to the superior synthesis of inexact CB logic. When used in a larger system implementing an image-processing algorithm, energy savings of 40% are possible. / text
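The idea of bounding error magnitude by shortening carry chains can be illustrated with a toy segmented adder: the low k bits add without propagating a carry into the high bits, so the worst-case error is 2**k regardless of the inputs. This sketch only illustrates carry-chain truncation; it is not the conditional-bounding design from the dissertation.

```python
def approximate_add(a, b, nbits=16, k=8):
    """Toy segmented approximate adder (an illustrative sketch, not the
    dissertation's conditional-bounding design): the low k bits are added
    with their carry-out dropped, so no carry ever propagates from the
    LSB segment into the MSB segment, bounding the error by 2**k."""
    mask = (1 << k) - 1
    lo = (a & mask) + (b & mask)       # LSB segment; carry-out is discarded
    hi = ((a >> k) + (b >> k)) << k    # MSB segment adds without the LSB carry
    return (hi | (lo & mask)) & ((1 << nbits) - 1)

print(hex(approximate_add(0x1234, 0x0101)))  # 0x1335: no carry crosses bit 8, exact
print(hex(approximate_add(0x00FF, 0x0001)))  # 0x0: dropped carry, error of 2**8
```

The CB logic described in the abstract goes further: instead of silently dropping the LSB carry, it conditionally biases the low bits so that over- and under-estimates can be mixed into a zero-centered error distribution.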
|
759 |
Modeling and synthesis of approximate digital circuits / Miao, Jin. 16 January 2015 (has links)
Energy minimization has become an ever more important concern in the design of very large scale integrated (VLSI) circuits. In recent years, approximate computing, which is based on the idea of trading off computational accuracy for improved energy efficiency, has attracted significant attention. Applications that are both compute-intensive and error-tolerant, such as digital signal processing, data mining, machine learning or search algorithms, are most suitable for approximation strategies. Such approximations can be achieved at several design levels, ranging from software, algorithm and architecture down to the logic or transistor levels. This dissertation investigates two research threads for the derivation of approximate digital circuits at the logic level: 1) modeling and synthesis of fundamental arithmetic building blocks; 2) automated techniques for synthesizing arbitrary approximate logic circuits under general error specifications. The first thread investigates elementary arithmetic blocks, such as adders and multipliers, which are at the core of all data processing and often consume most of the energy in a circuit. An optimal strategy is developed to reduce energy consumption in timing-starved adders under voltage over-scaling. This allows a formal demonstration that, under the quadratic error measures prevalent in signal processing applications, an adder design strategy that separates the most significant bits (MSBs) from the least significant bits (LSBs) is optimal. An optimal conditional bounding (CB) logic is further proposed for the LSBs, which selectively compensates for the occurrence of errors in the MSB part. There is a rich design space of optimal adders defined by different CB solutions. The other thread considers the problem of approximate logic synthesis (ALS) in two-level form. ALS is concerned with formally synthesizing a minimum-cost approximate Boolean function whose behavior deviates from a specified exact Boolean function in a well-constrained manner. It is established that the ALS problem unconstrained by the frequency of errors is isomorphic to a Boolean relation (BR) minimization problem, and hence can be efficiently solved by existing BR minimizers. An efficient heuristic is further developed which iteratively refines the magnitude-constrained solution to arrive at a two-level representation also satisfying error frequency constraints. To extend the two-level solution into an approach for multi-level approximate logic synthesis (MALS), Boolean network simplifications allowed by external don't cares (EXDCs) are used. The key contribution is in finding non-trivial EXDCs that can maximally approach the external BR and, when applied to the Boolean network, solve the MALS problem constrained by magnitude only. The algorithm then ensures compliance with error frequency constraints by recovering the correct outputs on the sought number of error-producing inputs while aiming to minimize the increase in network cost. Experiments have demonstrated the effectiveness of the proposed techniques in deriving approximate circuits. The approximate adders can save up to 60% energy compared to exact adders for a reasonable accuracy. When used in larger systems implementing image-processing algorithms, energy savings of 40% are possible. The logic synthesis approaches can generally produce approximate Boolean functions or networks with complexity reductions ranging from 30% to 50% under small error constraints. / text
|
760 |
Control of geometry error in hp finite element (FE) simulations of electromagnetic (EM) waves / Xue, Dong, 1977- 28 August 2008 (has links)
Not available / text
|