31

Simulink™ modules that emulate digital controllers realized with fixed-point or floating-point arithmetic

Robe, Edward D. January 1994 (has links)
No description available.
32

A Custom Computing Machine Solution for Simulation of Discretized Domain Physical Systems

Paar, Kevin J. 05 June 1996 (has links)
This thesis describes the implementation of a two-dimensional heat transfer simulation system using a Splash-2 Custom Computing Machine (CCM). The application was implemented as a proof of concept for using CCMs in the simulation of physical systems. The thesis discusses physical systems simulation and the need to discretize the domain of such systems, along with the techniques used for mathematical simulation. Also discussed is the nature of CCMs and why they are well suited to this application. A detailed description of the approach and implementation is included to fully document the design, along with an analysis of the performance of the resulting system. / Master of Science
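For illustration, a minimal sketch of the kind of discretized-domain computation the thesis targets: an explicit finite-difference update for the 2D heat equation on a uniform grid. The grid size, diffusivity, and time step below are arbitrary placeholders, not values from the thesis.

```python
import numpy as np

def heat_step(u, alpha, dt, h):
    """One explicit finite-difference time step of the 2D heat equation.
    Stable when alpha * dt / h**2 <= 0.25."""
    un = u.copy()
    # 5-point Laplacian stencil applied to interior nodes;
    # boundary rows/columns stay fixed (Dirichlet conditions).
    un[1:-1, 1:-1] = u[1:-1, 1:-1] + alpha * dt / h**2 * (
        u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
        - 4.0 * u[1:-1, 1:-1]
    )
    return un

# Example: a hot spot diffusing across a 32x32 plate.
u = np.zeros((32, 32))
u[16, 16] = 100.0
for _ in range(100):
    u = heat_step(u, alpha=1.0, dt=0.2, h=1.0)
```

Every interior node's update is independent of the others within a step, which is what makes stencils of this kind a natural fit for the pipelined FPGA hardware of a CCM such as Splash-2.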
33

Improved architectures for fused floating-point arithmetic units

Sohn, Jongwook 05 November 2013 (has links)
Most general-purpose processors (GPPs) and application-specific processors (ASPs) use floating-point arithmetic because of its wide and precise number system. However, floating-point operations require complex processing steps such as alignment, normalization, and rounding. To reduce this overhead, fused floating-point arithmetic units have been introduced. In this dissertation, improved architectures for three fused floating-point arithmetic units are proposed: 1) a fused floating-point add-subtract unit, 2) a fused floating-point two-term dot product unit, and 3) a fused floating-point three-term adder. All three fused floating-point units are implemented for both single and double precision and evaluated in terms of area, power consumption, latency, and throughput. To improve the performance of the fused floating-point add-subtract unit, a new alignment scheme, fast rounding, two dual-path algorithms, and pipelining are applied. The improved fused floating-point two-term dot product unit applies several optimizations: a new alignment scheme, early normalization and fast rounding, a four-input leading zero anticipator (LZA), a dual-path algorithm, and pipelining. The proposed fused floating-point three-term adder applies a new exponent compare and significand alignment scheme, double reduction, early normalization and fast rounding, a three-input LZA, and pipelining to improve performance.
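To see why fusing matters, here is a small sketch (not code from the dissertation) contrasting an unfused two-term dot product, which rounds three times, with a reference for the fused result, which rounds once. Exact rational arithmetic stands in for the fused hardware.

```python
from fractions import Fraction

def dot2_unfused(a, b, c, d):
    # Three separately rounded operations: round(round(a*b) + round(c*d)).
    return a * b + c * d

def dot2_fused_ref(a, b, c, d):
    # What a fused two-term dot product unit computes: a*b + c*d evaluated
    # exactly, then rounded once.  Fraction keeps the intermediate exact.
    return float(Fraction(a) * Fraction(b) + Fraction(c) * Fraction(d))

a, b = 1.0 + 2.0**-29, 1.0 - 2.0**-29   # a*b = 1 - 2**-58 exactly
print(dot2_unfused(a, b, -1.0, 1.0))    # 0.0: the product rounds to 1.0, then cancels
print(dot2_fused_ref(a, b, -1.0, 1.0))  # -3.469446951953614e-18 (= -2**-58)
```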
34

Investigation of 8-bit Floating-Point Formats for Machine Learning

Lindberg, Theodor January 2023 (has links)
Applying machine learning to various applications has gained significant momentum in recent years. However, the increasing complexity of networks introduces challenges such as a larger memory footprint and decreased throughput. This thesis aims to address these challenges by exploring the use of 8-bit floating-point numbers for machine learning. Numerical accuracy was evaluated empirically by implementing software models of the arithmetic and running experiments on a neural network provided by MediaTek. While the initial findings revealed poor accuracy when computations were performed solely with 8-bit floating-point arithmetic, a significant improvement was achieved by using a higher-precision accumulator register. The hardware cost was evaluated with a synthesis tool by measuring the increase in silicon area and the impact on clock frequency after four new vector instructions had been implemented. A large increase in area was measured for the functional blocks, but the hardware costs of interconnect and instruction decoding were negligible, and only a marginal decrease in system clock frequency was observed. Ideas that could likely improve the accuracy of inference calculations and decrease the hardware cost are proposed in the future work section.
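A rough sketch of the effect the thesis measures, assuming an E4M3-style format (1 sign, 4 exponent, 3 mantissa bits, saturating at 448); the thesis's exact formats and network may differ. Re-rounding every partial sum to 8 bits loses accuracy that a higher-precision accumulator preserves.

```python
import math

def quantize_fp8_e4m3(x, mant_bits=3, max_normal=448.0):
    """Round x to an E4M3-style 8-bit float grid.
    Models only rounding and saturation, not the bit-level encoding."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)                 # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** (mant_bits + 1)       # keep 1 implicit + 3 stored bits
    m = round(m * scale) / scale         # round-to-nearest on the significand
    y = math.ldexp(m, e)
    return math.copysign(min(abs(y), max_normal), y)

# Summation with fp8 inputs: fp8 accumulator vs. high-precision accumulator.
xs = [quantize_fp8_e4m3(0.1 * i) for i in range(1, 65)]
acc8, acc_hi = 0.0, 0.0
for v in xs:
    acc8 = quantize_fp8_e4m3(acc8 + v)   # every partial sum re-rounded to fp8
    acc_hi = acc_hi + v                  # partial sums kept in high precision
print(acc8, acc_hi)  # the fp8 accumulator stalls once increments fall
                     # below half a grid step, well short of the true total
```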
35

Decimal Floating-point Fused Multiply Add with Redundant Number Systems

May 2013 (has links)
The IEEE standard for decimal floating-point arithmetic was officially released in 2008. The new decimal floating-point (DFP) format and arithmetic can be applied to remedy the conversion error caused by representing decimal numbers in binary floating-point format, and to improve the performance of decimal processing in commercial and financial applications. Many architectures and algorithms for individual decimal floating-point arithmetic functions (e.g., addition, multiplication, division, and square root) have been proposed and investigated. However, because decimal numbers are represented less efficiently in binary devices, the area consumption and performance of DFP arithmetic units are not yet comparable with their binary counterparts. IBM introduced a binary fused multiply-add (FMA) instruction in its POWER series of processors to improve the performance of floating-point computations and to reduce the complexity of hardware design in reduced instruction set computing (RISC) systems. Such an instruction has also proven suitable for efficiently implementing not only stand-alone addition and multiplication, but also division, square root, and other transcendental functions. Additionally, unconventional number systems, including alternative digit sets and encodings, have shown performance and area-efficiency advantages in many computer arithmetic applications. In this research, by analyzing typical binary floating-point FMA designs and the design strategies of unconventional number systems, a high-performance decimal floating-point fused multiply-add (DFMA) unit with redundant internal encodings is proposed. First, the fixed-point components inside the DFMA (i.e., addition and multiplication) were studied as the basis of the FMA architecture, and specific number systems were applied to improve the basic decimal fixed-point arithmetic. The superiority of redundant number systems in stand-alone decimal fixed-point addition and multiplication is confirmed by synthesis results. A new DFMA architecture that exploits redundant internal operands is then proposed. Overall, the specific number system improves not only the efficiency of the fixed-point addition and multiplication inside the FMA, but also the architecture and algorithms used to build the FMA itself. Division, square root, reciprocal, reciprocal square root, and many other functions that exploit Newton's method or similar iterations can benefit from the proposed DFMA architecture. With only a few on-chip memory devices (e.g., look-up tables), or even software routines alone, these functions can be implemented on top of the hardwired FMA, so the proposed DFMA can serve as a single key on-chip component that reduces hardware cost. Additionally, this research on decimal arithmetic with unconventional number systems expands the options for performing other high-performance decimal arithmetic (e.g., stand-alone division and square root) on basic binary devices (i.e., AND gates, OR gates, and binary full adders). The proposed techniques are also expected to be helpful in other non-binary applications.
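As a sketch of why a hardwired FMA is such a versatile building block, here is Newton's iteration for the reciprocal written entirely with multiply-adds. This uses binary floats and Python 3.13's math.fma purely for illustration, whereas the thesis operates in decimal with redundant encodings.

```python
import math  # math.fma requires Python 3.13+

def reciprocal_newton(a, x0, iters=5):
    """Approximate 1/a by Newton's iteration x <- x*(2 - a*x).
    Each step needs only fused multiply-adds, which is why division can
    be built on top of a hardwired FMA plus a small look-up table seed."""
    x = x0
    for _ in range(iters):
        e = math.fma(-a, x, 1.0)   # residual 1 - a*x, rounded once
        x = math.fma(x, e, x)      # x + x*e = x*(2 - a*x)
    return x

print(reciprocal_newton(7.0, x0=0.1))  # ~0.14285714285714285
print(1.0 / 7.0)
```

Convergence is quadratic: with the seed x0 = 0.1 the residual shrinks roughly as 0.3, 0.09, 0.008, ..., so five iterations suffice at double precision.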
36

An Implementation of the IEEE Standard for Binary Floating-Point Arithmetic for the Motorola 6809 Microprocessor

Rosenblum, David Samuel 08 1900 (has links)
This thesis describes a software implementation of the IEEE Floating-Point Standard (IEEE Task P754), which is believed to be an effective system for reliable, accurate computer arithmetic. The standard is implemented as a set of procedures written in Motorola 6809 assembly language. Source listings of the procedures are contained in appendices.
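A software implementation like this one spends much of its effort packing and unpacking the standard's bit fields before and after every operation. A minimal sketch in Python (not the thesis's 6809 assembly) of unpacking a single-precision value:

```python
import struct

def unpack_ieee_single(x):
    """Split an IEEE-754 single-precision value into its three bit fields,
    the kind of unpacking a software floating-point library performs
    before every add or multiply."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF   # biased by 127
    fraction = bits & 0x7FFFFF       # 23 stored significand bits
    return sign, exponent, fraction

print(unpack_ieee_single(-6.5))  # (1, 129, 5242880): -1.625 * 2**(129 - 127)
```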
37

Constraint solver over floating-point numbers designed for program verification

Belaid, Mohammed 04 December 2013 (has links)
The verification of programs that compute with floating-point numbers is an important issue in the development of critical software systems. Computations over floating-point numbers are not exact, and their results may differ significantly from the results expected over the real numbers. The aim of this thesis is to design a constraint solver over floating-point numbers for program verification purposes. We introduce a new method for solving constraints over floating-point numbers, based on over-approximating floating-point constraints with constraints over the reals. This over-approximation is conservative: it does not lose any floating-point solutions. The resulting constraints are then solved with a constraint solver over the reals. We propose a new filtering algorithm using linear programming techniques that takes advantage of these over-approximations of floating-point constraints, as well as new search methods and heuristics for finding floating-point solutions. The method also makes it possible to compare the behavior of a program against a specification over the reals. These methods have been implemented and evaluated on a set of programs involving floating-point computations.
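A tiny example (not from the thesis) of why naive reasoning over the reals is unsound for floats, and why the over-approximation must be conservative:

```python
# Over the reals, "x + 1.0 == x" has no solution.  Over binary64 floats
# it has many: adding 1.0 to a large float is absorbed by rounding,
# because 1.0 is below half an ulp of x.
x = 1e16
print(x + 1.0 == x)  # True

# A verifier reasoning naively over the reals would mark the branch
# guarded by this test as dead code.  A sound solver must keep every
# float solution, which is why the thesis requires its real-number
# over-approximation to be conservative.
```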
38

Performance and Energy Efficient Building Blocks for Network-on-Chip Architectures

Vangal, Sriram R. January 2006 (has links)
The ever-shrinking size of MOS transistors brings the promise of scalable Network-on-Chip (NoC) architectures containing hundreds of processing elements with on-chip communication, all integrated into a single die. Such a computational fabric will provide high levels of performance in an energy-efficient manner. To mitigate the emerging wire-delay problem and to address the need for substantial interconnect bandwidth, packet-switched routers are fast replacing shared buses and dedicated wires as the interconnect fabric of choice. With on-chip communication consuming a significant portion of the chip power and area budgets, there is a compelling need for compact, low-power routers. While applications dictate the choice of the compute core, the advent of multimedia applications, such as 3D graphics and signal processing, places stronger demands on self-contained, low-latency floating-point processors with increased throughput. Therefore, this work focuses on two key building blocks critical to the success of NoC design: high-performance, area- and energy-efficient router and floating-point processor architectures. This thesis first presents a six-port, four-lane, 57 GB/s non-blocking router core based on wormhole switching. The router features double-pumped crossbar channels and destination-aware channel drivers that dynamically configure themselves based on the current packet destination. This enables a 45% reduction in crossbar channel area, a 23% reduction in overall router area, up to a 3.8X reduction in peak channel power, and a 7.2% improvement in average channel power, with no performance penalty over a published design. In a 150 nm six-metal CMOS process, the 12.2 mm² router contains 1.9 million transistors and operates at 1 GHz at 1.2 V. We next present a new pipelined single-precision floating-point multiply-accumulator core (FPMAC) featuring a single-cycle accumulate loop that uses base-32 and internal carry-save arithmetic with delayed addition techniques. Combined algorithmic, logic, and circuit techniques enable multiply-accumulates at speeds exceeding 3 GHz with single-cycle throughput. Unlike existing FPMAC architectures, the design eliminates scheduling restrictions between consecutive FPMAC instructions. The optimizations allow the costly normalization step to be removed from the critical accumulate loop; its logic is conditionally powered down using dynamic sleep transistors during long accumulate operations, saving active and leakage power. In addition, an improved leading zero anticipator (LZA) and overflow detection logic applicable to the carry-save format are presented. In a 90 nm seven-metal dual-VT CMOS process, the 2 mm² custom design contains 230K transistors. The fully functional first silicon achieves 6.2 GFLOPS of performance while dissipating 1.2 W at 3.1 GHz and a 1.3 V supply. It is clear that the realization of successful NoC designs requires well-balanced decisions at all levels: architecture, logic, circuit, and physical design. Our results for these key building blocks demonstrate the feasibility of pushing the performance limits of compute cores and communication routers while keeping active and leakage power, as well as area, under control. / Report code: LiU-TEK-LIC-2006:36.
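A minimal sketch of the carry-save idea behind the FPMAC's single-cycle accumulate loop, shown here on small integers: three addends are reduced to two without any carry propagation, and the slow carry-propagating add is deferred until the final result is needed.

```python
def carry_save_add(a, b, c):
    """Reduce three addends to two (sum, carry) with no carry propagation.
    Each bit position is handled independently, which is what lets a
    carry-save accumulate loop close in a single short cycle."""
    s = a ^ b ^ c                                 # bitwise sum
    carry = ((a & b) | (b & c) | (a & c)) << 1    # bitwise majority, shifted
    return s, carry

s, c = carry_save_add(13, 7, 9)
assert s + c == 13 + 7 + 9   # one conventional add finishes the job
print(s, c)                  # 3 26
```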
39

Towards reproducible, accurately rounded and efficient BLAS

Chohra, Chemseddine 10 March 2017 (has links)
Numerical reproducibility failures arise in parallel computation mainly because floating-point summation is non-associative: parallel environments dynamically change the order of operations, so numerical results may change from one run to the next. We ensure reproducibility by extending, as far as possible, the IEEE-754 correct-rounding property from individual arithmetic operations to larger computing sequences. We introduce RARE-BLAS, a reproducible and accurate BLAS implementation that relies on error-free transformations and suitable summation algorithms. This thesis presents solutions for the level 1 (asum, dot, and nrm2) and level 2 (gemv and trsv) BLAS routines. Implementations relying on parallel programming APIs (OpenMP, MPI) and SIMD vector instruction sets are proposed, and their efficiency is compared with an optimized library (Intel MKL) and with existing reproducible solutions.
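The error-free transformations the thesis builds on can be illustrated with Knuth's TwoSum, a standard algorithm sketched here independently of RARE-BLAS's actual code:

```python
def two_sum(a, b):
    """Knuth's error-free transformation: returns (s, e) with
    s = fl(a + b) and a + b = s + e exactly.  Reproducible summation
    algorithms chain such transformations so that no rounding error
    is ever silently discarded, whatever the summation order."""
    s = a + b
    bp = s - a                       # the part of b actually absorbed into s
    e = (a - (s - bp)) + (b - bp)    # the exact leftover
    return s, e

s, e = two_sum(1e16, 1.0)
print(s, e)   # 1e+16 1.0 -- the unit that plain addition rounds away
```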
40

Lazy exact real arithmetic using floating point operations

McCleeary, Ryan 01 August 2019 (has links)
Exact real arithmetic systems can deliver any requested precision on the output of a computation. They are used in a wide variety of applications where a high degree of precision is necessary, including differential equation solvers, linear equation solvers, large-scale mathematical models, and SMT solvers. This dissertation proposes a new exact real arithmetic system that represents real numbers as lazy lists of floating-point numbers. It proposes algorithms for basic arithmetic on these structures and proves their correctness. The proposed system has the advantage that its algorithms can be supported by modern floating-point hardware while still forming a lazy exact real arithmetic system.
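A sketch of the flavor of such a representation (the dissertation's actual structures and algorithms differ): a real number as a lazy stream of floats whose partial sums converge to it, so that more terms yield more precision on demand.

```python
from fractions import Fraction
from itertools import islice

def lazy_rational(p, q):
    """Lazily emit floats whose partial sums converge to p/q.
    Each term is the float value of the remaining error, which is
    tracked exactly with rational arithmetic so nothing is lost."""
    rem = Fraction(p, q)
    while rem != 0:
        t = float(rem)        # nearest float to what is still owed
        yield t
        rem -= Fraction(t)    # Fraction(float) is exact: an error-free step

terms = list(islice(lazy_rational(1, 3), 3))  # demand only 3 terms
print(terms)        # [0.3333333333333333, ~1.85e-17, ...]
print(sum(terms))   # partial sums converge to 1/3
```

Because every term is an ordinary double, each refinement step runs on native floating-point and integer hardware, which is the efficiency argument the dissertation makes for float-based lazy representations.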
