41

Quality Evaluation in Fixed-point Systems with Selective Simulation / Evaluation de la qualité des systèmes en virgule fixe avec la simulation sélective

Nehmeh, Riham 13 June 2017 (has links)
Time-to-market and implementation cost are high-priority considerations in the automation of digital hardware design. Nowadays, digital signal processing applications use fixed-point architectures due to their advantages in terms of implementation cost. Thus, floating-point to fixed-point conversion is mandatory. The conversion process consists of two parts corresponding to the determination of the integer part word-length and the fractional part word-length. The refinement of fixed-point systems requires optimizing data word-lengths to prevent overflows and excessive quantization noise while minimizing implementation cost. Applications in the image and signal processing domains are tolerant to errors if their probability or their amplitude is small enough. Numerous research works focus on optimizing the fractional part word-length under an accuracy constraint. Reducing the number of bits for the fractional part leads to a small error compared to the signal amplitude. Perturbation theory can be used to propagate these errors inside the system, except for unsmooth operations, like decision operations, for which a small error at the input can lead to a large error at the output. Likewise, optimizing the integer part word-length can significantly reduce the cost when the application is tolerant to a low probability of overflow. Overflows lead to errors of high amplitude, so their occurrence must be limited. For word-length optimization, the challenge is to efficiently evaluate the effect of overflow and unsmooth errors on the application quality metric. The high amplitude of these errors requires simulation-based approaches to evaluate their effects on quality. In this thesis, we aim at accelerating the process of quality metric evaluation. We propose a new framework using selective simulation to accelerate the simulation of overflow and unsmooth error effects. This approach can be applied to any C-based digital signal processing application. Compared to complete fixed-point simulation-based approaches, where all the input samples are processed, the proposed approach simulates the application only when an error occurs. Indeed, overflows and unsmooth errors must be rare events to maintain the system functionality. Consequently, selective simulation significantly reduces the time required to evaluate the application quality metric. Moreover, we focus on optimizing the integer part, which can significantly decrease the implementation cost when a slight degradation of the application quality is acceptable. Indeed, many applications are tolerant to overflows if the probability of overflow occurrence is low enough. Thus, we exploit the proposed framework in a new integer word-length optimization algorithm. The combination of the optimization algorithm and the selective simulation technique significantly decreases the optimization time.
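To make the selective-simulation idea concrete, here is a minimal Python sketch (my own illustration, not the thesis implementation; all names are hypothetical) in which the costly bit-true simulation runs only on input blocks where a fixed-point overflow is actually detected, rather than on every sample:

```python
import numpy as np

def quantize(x, int_bits, frac_bits):
    """Quantize to signed fixed-point; return the value and an overflow flag."""
    scale = 2.0 ** frac_bits
    lo = -(2.0 ** int_bits)
    hi = 2.0 ** int_bits - 1.0 / scale
    q = np.floor(x * scale) / scale
    overflowed = (q < lo) | (q > hi)
    return np.clip(q, lo, hi), overflowed

def selective_quality(blocks, int_bits, frac_bits, simulate_block):
    """Run the expensive fixed-point simulation only on blocks where an
    overflow occurs; error-free blocks are assumed to match the reference."""
    degradation = 0.0
    for block in blocks:
        _, ovf = quantize(block, int_bits, frac_bits)
        if ovf.any():                     # rare event: simulate in full
            degradation += simulate_block(block)
    return degradation
```

Because overflows are rare by design, the inner simulate_block call fires on only a small fraction of the input, which is where the reported speed-up comes from.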
42

Origin-centric techniques for optimising scalability and the fidelity of motion, interaction and rendering

Thorne, Chris January 2008 (has links)
[Truncated abstract] This research addresses endemic problems in the fields of computer graphics and simulation, such as jittery motion, spatial scalability, rendering artefacts such as z-buffer tearing, the repeatability of physics dynamics, and numerical error in positional systems. Designers of simulation and computer graphics software tend to map real-world navigation rules onto the virtual world, expecting to see equivalent virtual behaviour. After all, if computers are programmed to simulate the real world, it is reasonable to expect the virtual behaviour to correspond. However, in computer simulation many behaviours and other computations show measurable problems inconsistent with real-world experience, particularly at large distances from the virtual world origin. Many of these problems, particularly in rendering, can be imperceptible, so users may be oblivious to them, but they are measurable using experimental methods. These effects, generically termed spatial jitter in this thesis, are found in this study to stem from floating-point error in positional parameters such as spatial coordinates. This simulation error increases with distance from the coordinate origin and as the simulation progresses through the pipeline. The most common form of simulation error relevant to this study is spatial error, which this thesis finds is calculated not, as might be expected, by numerical relative-error propagation rules, but by the rules of geometry. ... The thesis shows that the thinking behind real-world rules, such as for navigation, has to change in order to properly design for optimal-fidelity simulation. Origin-centric techniques, formulae, terms, architecture and processes are all presented as one holistic solution in the form of an optimised simulation pipeline. The results of analysis, experiments and case studies are used to derive a formula for relative spatial error that accounts for potential pathological cases. A formula for spatial error propagation is then derived by using the new knowledge of spatial error to extend numerical relative-error propagation mathematics. Finally, analytical results are developed to provide a general mathematical expression for maximum simulation error and how it varies with distance from the origin and the number of mathematical operations performed. We conclude that the origin-centric approach provides a general and optimal solution to spatial jitter. Along with changing the way one thinks about navigation, and the process guidelines and formulae developed in the study, the approach provides a new paradigm for positional computing. This paradigm can improve many aspects of computer simulation in areas such as entertainment, visualisation for education, industry, science, and training. Examples are: spatial scalability; the accuracy of motion, interaction and rendering; and the consistency and predictability of numerical computation in physics. This research also affords potential cost benefits through simplification of software design and code. These cost benefits come from the core techniques for minimising position-dependent error and error propagation, from the resulting simplifications, and from new algorithms that flow naturally out of the core solution.
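The root cause described above is easy to reproduce: the gap between adjacent representable doubles (one ulp) grows in proportion to the magnitude of the coordinate, so positional resolution degrades with distance from the origin. A small Python illustration (mine, not the thesis author's; requires Python 3.9+ for math.ulp):

```python
import math

# The spacing between adjacent double-precision floats (one ulp) grows
# with magnitude, so positional resolution is lost far from the origin.
for x in (1.0, 1e3, 1e6, 1e9, 1e12):
    print(f"coordinate {x:>14.0f}   ulp = {math.ulp(x):.3e}")

# An origin-centric scheme keeps the viewer near (0, 0, 0), so |x| and
# therefore the ulp (the floor on spatial jitter) stay small.
```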
43

Voice Codec for Floating Point Processor

Ross, Johan, Engström, Hans January 2008 (has links)
As part of an ongoing project at the Department of Electrical Engineering (ISY) at Linköping University, a voice decoder using floating-point formats has been the focus of this master thesis. Previous work has been done developing an mp3-decoder using the floating-point formats. All is expected to be implemented on a single DSP. The ever-present desire to make things smaller, more efficient and less power-consuming is the main reason for this master thesis regarding the use of a floating-point format instead of the traditional integer format in a GSM codec. The idea with the low-precision floating-point format is to be able to reduce the size of the memory. This in turn reduces the total chip area needed and also decreases the power consumption. One main question is whether this can be done with the floating-point format without losing too much sound quality of the speech. When using the integer format, one can represent every value in the range, depending on how many bits are being used. When using a floating-point format, you can represent larger values using fewer bits compared to the integer format, but you lose representation of some values and have to round the values off. From the tests that have been made with the decoder during this thesis, it has been found that the audible difference between the two formats is very small and can hardly be heard, if at all. The rounding seems to have very little effect on the quality of the sound, and the implementation of the codec has succeeded in reproducing sound quality similar to the GSM standard decoder.
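The range-versus-precision trade-off the authors describe can be illustrated with a toy "minifloat" rounding routine (a sketch of my own, not the thesis implementation; subnormals and overflow handling are deliberately omitted):

```python
import math

def to_minifloat(x, mant_bits=7, exp_bits=5):
    """Round x to a toy low-precision float (sign + exp_bits + mant_bits)."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)                      # x = m * 2**e with 0.5 <= |m| < 1
    m = round(m * 2 ** mant_bits) / 2 ** mant_bits
    bias = 2 ** (exp_bits - 1)
    e = max(-bias + 1, min(bias, e))          # clamp exponent to the format
    return math.ldexp(m, e)

# A 13-bit minifloat spans a far wider dynamic range than a 13-bit integer,
# but nearby values are rounded onto the same representable result.
for s in (0.001234, 0.5, 123.456, 20000.0):
    print(f"{s:>12} -> {to_minifloat(s)}")
```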
44

Performance and Energy Efficient Building Blocks for Network-on-Chip Architectures

Vangal, Sriram R. January 2006 (has links)
The ever-shrinking size of MOS transistors brings the promise of scalable Network-on-Chip (NoC) architectures containing hundreds of processing elements with on-chip communication, all integrated into a single die. Such a computational fabric will provide high levels of performance in an energy-efficient manner. To mitigate the emerging wire-delay problem and to address the need for substantial interconnect bandwidth, packet-switched routers are fast replacing shared buses and dedicated wires as the interconnect fabric of choice. With on-chip communication consuming a significant portion of the chip power and area budgets, there is a compelling need for compact, low-power routers. While applications dictate the choice of the compute core, the advent of multimedia applications, such as 3D graphics and signal processing, places stronger demands on self-contained, low-latency floating-point processors with increased throughput. Therefore, this work focuses on two key building blocks critical to the success of NoC design: high-performance, area- and energy-efficient router and floating-point processor architectures.

This thesis first presents a six-port four-lane 57 GB/s non-blocking router core based on wormhole switching. The router features double-pumped crossbar channels and destination-aware channel drivers that dynamically configure based on the current packet destination. This enables a 45% reduction in crossbar channel area, 23% in overall router area, up to a 3.8X reduction in peak channel power, and a 7.2% improvement in average channel power, with no performance penalty over a published design. In a 150 nm six-metal CMOS process, the 12.2 mm² router contains 1.9 million transistors and operates at 1 GHz at 1.2 V. We next present a new pipelined single-precision floating-point multiply-accumulator core (FPMAC) featuring a single-cycle accumulate loop using base-32 and internal carry-save arithmetic with delayed addition techniques. Combined algorithmic, logic and circuit techniques enable multiply-accumulates at speeds exceeding 3 GHz with single-cycle throughput. Unlike existing FPMAC architectures, the design eliminates scheduling restrictions between consecutive FPMAC instructions. The optimizations allow the costly normalization step to be removed from the critical accumulate loop and conditionally powered down using dynamic sleep transistors on long accumulate operations, saving active and leakage power. In addition, an improved leading-zero anticipator (LZA) and overflow detection logic applicable to the carry-save format are presented. In a 90 nm seven-metal dual-VT CMOS process, the 2 mm² custom design contains 230K transistors. The fully functional first silicon achieves 6.2 GFLOPS of performance while dissipating 1.2 W at 3.1 GHz and a 1.3 V supply.

It is clear that the realization of successful NoC designs requires well-balanced decisions at all levels: architecture, logic, circuit and physical design. Our results from key building blocks demonstrate the feasibility of pushing the performance limits of compute cores and communication routers, while keeping active and leakage power, and area, under control. / Report code: LiU-TEK-LIC-2006:36.
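A software analogue of the single-cycle carry-save accumulate loop described above is to keep the running sum in a wider internal form and round only once at the end, instead of normalising and rounding after every multiply-add. A hedged NumPy sketch of that effect (illustrative only, not the FPMAC design itself):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal(10_000).astype(np.float32)
b = rng.standard_normal(10_000).astype(np.float32)

# Round the accumulator back to float32 after every multiply-add step.
acc_narrow = np.float32(0.0)
for x, y in zip(a, b):
    acc_narrow = np.float32(acc_narrow + np.float32(x) * np.float32(y))

# Accumulate in float64 (a wide, unrounded internal form) and round once.
acc_wide = np.float32(np.dot(a.astype(np.float64), b.astype(np.float64)))

print(acc_narrow, acc_wide)  # wide accumulation avoids per-step rounding error
```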
46

Multi-Mode Floating-Point Multiply-Add Fused Unit for Low-Power Applications

Yu, Kee-khuan 01 August 2011 (has links)
In digital signal processing and multimedia applications, floating-point (FP) multiplication and addition are the most commonly used operations, and FP multiplications are frequently followed by FP additions. Therefore, in order to achieve high performance and low cost, multiplication and addition are usually combined into a single unit, known as the FP Multiply-Add Fused (MAF) unit. On the other hand, mobile devices are developing rapidly, and for this kind of device, performance and sustainable power consumption have become major research concerns; mechanisms to reduce energy consumption are therefore increasingly important. We propose a multi-mode FP MAF based on the concept of iterative multiplication and truncated addition, which offers different operating modes with different errors. The MAF provides seven modes in total: three for FP multiply-accumulate operations, and two each for single FP multiplication and single FP addition. The three multiply-accumulate modes have errors of 0%, 0.328% and 1.107%, where the 0% mode matches the standard IEEE 754 single-precision FP Multiply-Add Fused operation. For FP multiplication and FP addition, the proposed MAF lets users choose between two error modes: 0% and 0.328% for FP multiplication, and 0% and 0.781% for FP addition; again, the 0% modes match the standard IEEE 754 single-precision operations. Compared with the standard IEEE 754 single-precision FP MAF, the proposed multi-mode FP MAF architecture requires 4.5% less area at the cost of about 22% more delay to achieve the multi-mode capability. To demonstrate the power efficiency of the proposed FP MAF, it is used to perform FP multiply-accumulate, FP multiplication, and FP addition operations in an RGB-to-YUV format conversion application. Experimental results show that the proposed multi-mode FP MAF can significantly reduce power consumption when the modes with error are adopted.
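The truncation idea behind the error modes can be sketched simply: forming only the partial products of the most significant mantissa bits saves switching activity at the cost of a small, bounded relative error. A toy Python model (my own illustration; the error percentages quoted above come from the thesis, not from this sketch):

```python
def truncated_mul(a, b, bits=24, kept=16):
    """Toy truncated mantissa multiply for values in [0, 1): keep only the
    top `kept` of `bits` mantissa bits in each operand before multiplying,
    as a reduced-precision, lower-power mode might."""
    a_i = (int(a * 2 ** bits) >> (bits - kept)) << (bits - kept)
    b_i = (int(b * 2 ** bits) >> (bits - kept)) << (bits - kept)
    return (a_i * b_i) / 2 ** (2 * bits)

exact = (1.0 / 3.0) * 0.7
approx = truncated_mul(1.0 / 3.0, 0.7)
print(exact, approx, abs(exact - approx) / exact)  # small bounded error
```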
47

Implementation of Floating Point CORDIC and its Application in 3D Computer Graphics

Wang, Po-Li 02 July 2002 (has links)
Computer graphics has become one of the most important ways to display information and has been applied in many areas such as CAD, medical image processing, computer animation, multimedia and virtual reality. These popular applications rely on low-cost, real-time processing of 3D graphics, which has become available due to breakthroughs in the hardware design of 3D graphics engines. In this thesis, we implement a CORDIC-based floating-point processor that can compute a wide variety of arithmetic operations and show how it can be applied to the design of a 3D engine.
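For readers unfamiliar with CORDIC, the following Python sketch (a generic textbook rotation-mode CORDIC, not the thesis's hardware design) shows how sine and cosine fall out of a sequence of shift-and-add micro-rotations:

```python
import math

def cordic_sincos(theta, iters=32):
    """Rotation-mode CORDIC for |theta| <= pi/2: rotate the vector by
    +/- atan(2**-i) each step, steering the residual angle z to zero.
    In hardware, the 2**-i scalings are pure bit shifts."""
    k = 1.0
    for i in range(iters):                     # net gain of the micro-rotations,
        k /= math.sqrt(1.0 + 2.0 ** (-2 * i))  # compensated up front
    x, y, z = k, 0.0, theta
    for i in range(iters):
        d = 1.0 if z >= 0.0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    return x, y                                # (cos(theta), sin(theta))

print(cordic_sincos(math.pi / 6))              # approx (0.8660, 0.5000)
```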
48

Low-power fused FFT butterfly arithmetic unit with merged multiple-constant multiplier

Min, Jae Hong 21 February 2011 (has links)
Fused floating-point arithmetic units such as the floating-point fused Dot-Product (fused DP) and the floating-point fused Add-Subtract (fused AS) are employed in the implementation of the FFT butterfly unit because of their low power consumption and small area; in addition, the fused DP has less delay and lower error. Among the elements of the fused DP, the two internal mantissa multipliers occupy the largest area and consume the most power. A Multiple-Constant Multiplier (MCM) architecture has high speed, low power consumption, and small area compared to a conventional multiplier; using an MCM for the internal mantissa multiplier therefore provides a path to low power and high performance. Despite its benefits, the MCM lacks precision compared to a conventional multiplier, so a butterfly unit using the MCM has higher error. In this report, a new butterfly unit architecture is designed by merging conventional MCMs. The new architecture provides two options: it either reduces the error or lowers the power compared to a conventional MCM butterfly unit.
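The shift-and-add structure of an MCM is easy to show in miniature. In this hypothetical sketch, one shared intermediate term serves two constant multiplications, which is the adder-sharing that gives MCM its area and power advantage:

```python
def mcm_5x_45x(x):
    """Multiply an integer by the constants 5 and 45 using only shifts and
    adds, sharing the 5x term: 5x = (x << 2) + x, 45x = ((5x) << 3) + 5x."""
    t5 = (x << 2) + x       # 5x, built from a single adder
    t45 = (t5 << 3) + t5    # 45x = 9 * (5x), reusing the 5x adder output
    return t5, t45

assert mcm_5x_45x(7) == (35, 315)
```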
49

Dvigubo tikslumo slankaus kablelio daugybos realizavimas ir tyrimas / Implementation and Research of Double-Precision Floating-Point Multiplication

Lešinskytė, Vaida 02 September 2011 (has links)
This thesis analyzes problems in the computation of floating-point multiplication. The first chapter analyzes the conversion of floating-point numbers from the decimal system to the binary system and back, which is needed to highlight the relevance of the problem under study: merely converting a high-precision decimal number to binary and back to decimal already degrades its accuracy, and the degradation becomes even more pronounced once arithmetic operations are performed. The last subsection of the first chapter also presents several historical disasters that arose precisely because of floating-point precision problems. The second chapter analyzes the hardware and software problems that arise when implementing floating-point multiplication, examining the influence of hardware on algorithm speed as well as the influence of software on the accuracy of the result and the speed of computation; it also describes several floating-point implementation algorithms. The third chapter presents the requirements for the algorithm implementation, describes the main problems encountered while implementing it, and summarizes the results. Conclusions are given at the end of the thesis, and the appendices contain the algorithm implementation: program code fragments and their comments.
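The decimal-to-binary accuracy loss the first chapter examines is visible directly in any IEEE 754 environment; for example, in Python:

```python
from decimal import Decimal

# 0.1 has no finite binary expansion, so the stored double is already
# inexact before any arithmetic; operations then compound the error.
print(Decimal(0.1))         # 0.1000000000000000055511151231257827...
print(0.1 + 0.2 == 0.3)     # False
print(f"{0.1 + 0.2:.17f}")  # 0.30000000000000004
```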
50

Digital control networks for virtual creatures

Bainbridge, Christopher James January 2010 (has links)
Robot control systems evolved with genetic algorithms traditionally take the form of floating-point neural network models. This thesis proposes that digital control systems, such as quantised neural networks and logical networks, may also be used for the task of robot control. The inspiration for this is the observation that the dynamics of discrete networks may contain cyclic attractors which generate rhythmic behaviour, and that rhythmic behaviour underlies the central pattern generators which drive low-level motor activity in the biological world. To investigate this, a series of experiments were carried out in a simulated, physically realistic 3D world. The performance of evolved controllers was evaluated on two well-known control tasks: pole balancing, and locomotion of evolved morphologies. The performance of evolved digital controllers was compared to that of evolved floating-point neural networks. The results show that the digital implementations are competitive with floating-point designs on both of the benchmark problems. In addition, the first reported evolution from scratch of a biped walker is presented, demonstrating that when all parameters are left open to evolutionary optimisation, complex behaviour can result from simple components.
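The cyclic-attractor observation that motivates the thesis can be seen in even a three-node boolean network: iterating a fixed update rule drives the state into a repeating cycle, the discrete analogue of a central pattern generator. A toy illustration (the update rule is hypothetical, not taken from the thesis):

```python
def step(state):
    """One synchronous update of a toy three-node boolean network."""
    a, b, c = state
    return (b, c, a ^ b)

state = (1, 0, 0)
trajectory = []
for _ in range(9):
    trajectory.append(state)
    state = step(state)
print(trajectory)  # the state returns to (1, 0, 0): a length-7 cyclic attractor
```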
