1

Some theoretical and computational aspects of approximation theory

Carpenter, A. J. January 1988 (has links)
No description available.
2

Design and Analysis of Table-based Arithmetic Units with Memory Reduction

Chen, Kun-Chih 01 September 2009 (has links)
Many digital signal processing applications need special function units that compute complicated arithmetic functions such as the reciprocal and the logarithm. Conventionally, the table-based design strategy implements these function units with lookup tables. However, the table size grows exponentially with the required precision. In this thesis, we propose two methods to reduce the table size: bottom-up non-uniform segmentation, and an approach that merges uniform piecewise interpolation with the Newton-Raphson method. Experimental results show significant table size reductions in most cases.
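As background for the second method, the sketch below shows the general pattern of pairing a small interpolation table with one Newton-Raphson refinement, here for the reciprocal on [1, 2). The segment count, the use of floating point, and all names are illustrative assumptions, not the thesis design.

```python
# A minimal sketch, assuming f(x) = 1/x on [1, 2) and 64 uniform segments
# (both illustrative choices, not the thesis parameters).
SEGMENTS = 64

# Offline: store one (slope, intercept) pair per segment instead of a
# full-precision value for every input -- this is where the table shrinks.
TABLE = []
for i in range(SEGMENTS):
    x0 = 1.0 + i / SEGMENTS
    x1 = 1.0 + (i + 1) / SEGMENTS
    slope = (1.0 / x1 - 1.0 / x0) / (x1 - x0)
    TABLE.append((slope, 1.0 / x0 - slope * x0))

def reciprocal(x):
    """Approximate 1/x for x in [1, 2)."""
    i = int((x - 1.0) * SEGMENTS)   # segment index (top bits of x in hardware)
    slope, intercept = TABLE[i]
    y = slope * x + intercept       # piecewise-linear initial guess
    return y * (2.0 - x * y)        # one Newton-Raphson step for the reciprocal

print(reciprocal(1.5))  # ~0.6666667
```

Since each Newton-Raphson iteration roughly doubles the number of correct bits, the table only has to supply about half of the final precision, which is where the size reduction comes from.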
3

Design of a Table-Driven Function Evaluation Generator Using Bit-Level Truncation Methods

Tseng, Yu-ling 30 August 2011 (has links)
Function evaluation is one of the key arithmetic operations in many applications, including 3D graphics and stereo. Among the various designs of hardware-based function evaluators, piecewise polynomial approximation methods are the most popular: they approximate the function curve in each sub-interval with a polynomial whose coefficients are stored in an entry of a ROM. Conventional piecewise methods usually determine the bit-widths of each ROM entry and of the multipliers and adders by analyzing the error sources separately, including the polynomial approximation error, the coefficient quantization error, the truncation errors of arithmetic operations, and the final rounding error. In this thesis, we present a new piecewise function evaluation design that considers all the error sources together. By combining the errors introduced during approximation, quantization, truncation, and rounding, we can efficiently reduce the area cost of the ROM and the corresponding arithmetic units. The proposed method is applied to piecewise function evaluators with both uniform and non-uniform segmentation.
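To make the contrast concrete, the following numeric sketch (with hypothetical error figures, not data from the thesis) shows why a joint budget can accept a datapath that separate per-source budgets would reject.

```python
# Target: keep the total error below one ulp of a 16-fractional-bit output.
ULP = 2.0 ** -16

# Hypothetical error contributions of one candidate datapath configuration:
e_approx = 0.25 * ULP   # piecewise polynomial approximation error
e_quant  = 0.10 * ULP   # coefficient quantization error (ROM word width)
e_trunc  = 0.10 * ULP   # truncation errors of the arithmetic operations
e_round  = 0.50 * ULP   # final rounding error (at most half an ulp)

# Separate budgets: rounding needs 0.5 ulp, so the other three sources get at
# most ULP/6 each. This configuration is then rejected on e_approx alone:
print(e_approx <= ULP / 6)                           # False

# Joint analysis: only the sum matters, and the same configuration passes:
print(e_approx + e_quant + e_trunc + e_round < ULP)  # True
```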
4

Table Based Design for Function Evaluation and Error Correcting Codes

Wen, Chia-Sheng 23 July 2012 (has links)
Lookup-table (LUT)-based methods are a common approach in many research areas. In this dissertation, we present several new designs for table-based function evaluation and table-based error-correcting coding. In Chapter 3, a new function evaluation method, called two-level approximation, is presented: piecewise degree-one polynomials provide the initial approximation in the first level, followed by a refined approximation of the shared normalized difference functions in the second level. In Chapter 4, we present a new non-uniform segmentation method that searches for the optimal segmentation scheme under different design goals: minimizing ROM size, total area, or delay. In Chapter 5, a new design methodology for table-based function evaluation is presented. Unlike previous approaches, which usually determine the bit widths by assigning allowable errors to individual hardware components, our new design considers the total error budget jointly in order to optimize the bit widths of all the hardware components, leading to significant improvements in both area and delay. Finally, in Chapter 6, a similar table-based concept is used in the design of an error-correcting encoder based on a modified polynomial form of the Lagrange interpolation formula, resulting in smaller critical path delay and lower power consumption.
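For reference, Chapter 6 starts from the standard Lagrange interpolation formula for points $(x_j, y_j)$, $j = 0, \dots, n$ (the dissertation's modified polynomial form is not reproduced here):

$$L(x) = \sum_{j=0}^{n} y_j \prod_{\substack{m=0 \\ m \neq j}}^{n} \frac{x - x_m}{x_j - x_m}$$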
5

Design of a CORDIC Function Generator Using Table-Driven Function Evaluation with Bit-Level Truncation

Hsu, Wei-Cheng 10 September 2012 (has links)
Function evaluation is one of the key arithmetic operations in many applications, including 3D graphics and stereo. Among the various hardware-based function evaluation methods, piecewise polynomial approximation is the most popular approach: it approximates the function curve in each sub-interval with a polynomial whose coefficients are stored in an entry of a lookup-table ROM. Conventional piecewise methods usually determine the bit-widths of each ROM entry and of the multipliers and adders by analyzing the error sources separately, including the polynomial approximation error, the coefficient quantization error, the truncation errors of arithmetic operations, and the final rounding error. In this thesis, we present a new piecewise function evaluation design that considers all the error sources together. By combining the errors introduced during approximation, quantization, truncation, and rounding, we can efficiently reduce the area cost of the ROM and the corresponding arithmetic units in the design of CORDIC processors.
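For context, the core of a CORDIC generator is the shift-and-add micro-rotation loop below, shown as a generic software model; the bit widths, the iteration count, and the truncation behaviour of the thesis design are not reproduced here.

```python
import math

FRAC = 16    # fractional bits (illustrative)
ITERS = 16   # number of micro-rotations (illustrative)

# Precomputed constants: the micro-rotation angles atan(2^-i) and the inverse
# of the accumulated CORDIC gain, both scaled to fixed point.
ANGLES = [round(math.atan(2.0 ** -i) * (1 << FRAC)) for i in range(ITERS)]
K_INV = 1.0
for i in range(ITERS):
    K_INV /= math.sqrt(1.0 + 2.0 ** (-2 * i))
GAIN_INV = round(K_INV * (1 << FRAC))

def cordic_sin_cos(theta):
    """Return (cos(theta), sin(theta)) for |theta| < pi/2 via CORDIC rotation."""
    z = round(theta * (1 << FRAC))   # residual angle in fixed point
    x, y = GAIN_INV, 0               # pre-scaled so the CORDIC gain cancels
    for i in range(ITERS):
        d = 1 if z >= 0 else -1      # steer the residual angle toward zero
        x, y = x - d * (y >> i), y + d * (x >> i)  # shift-and-add rotation
        z -= d * ANGLES[i]
    return x / (1 << FRAC), y / (1 << FRAC)

print(cordic_sin_cos(0.5))  # roughly (0.8776, 0.4794)
```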
6

Improved Bit-Level Truncation with Joint Error Analysis for Table-Based Function Evaluation

Lin, Shin-hung 12 September 2012 (has links)
Function evaluation is used in many science and engineering applications. To reduce computation time, various hardware implementations have been proposed to accelerate function evaluation. Table-based piecewise polynomial approximation is one of the major methods in hardware function evaluation designs; it requires only simple hardware components to achieve the desired precision. The piecewise polynomial method approximates the original function values in each partitioned subinterval using low-degree polynomials whose coefficients are stored in look-up tables. Hardware implementations introduce errors, and conventional error analysis for piecewise polynomial methods covers four error sources: the polynomial approximation error, the coefficient quantization error, the arithmetic truncation error, and the final rounding error. The typical design approach is to pre-allocate a maximum allowable error budget to each individual hardware component so that the total error induced by these individual errors satisfies the target bit accuracy. In this thesis, we present a new design approach that jointly considers the error sources when designing all the hardware components, including the look-up tables and the arithmetic units, so that the total area cost is reduced compared to previously published designs.
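The four error sources map directly onto a degree-one fixed-point datapath. The sketch below marks where each one enters; all bit widths are hypothetical choices for illustration, not values from the thesis.

```python
C_FRAC = 20    # fractional bits of stored coefficients -> quantization error
T_FRAC = 18    # fractional bits kept after the multiply -> truncation error
OUT_FRAC = 16  # fractional bits of the result          -> final rounding error

def evaluate_segment(c1, c0, x_low):
    """Evaluate c1*x_low + c0 on one subinterval; c1, c0 and x_low are
    integers in Q.C_FRAC format. The choice of c1, c0 carries the
    approximation error; the steps below add the implementation errors."""
    p = (c1 * x_low) >> (2 * C_FRAC - T_FRAC)   # truncate the product to T_FRAC
    s = p + (c0 >> (C_FRAC - T_FRAC))           # align the constant term and add
    half = 1 << (T_FRAC - OUT_FRAC - 1)
    return (s + half) >> (T_FRAC - OUT_FRAC)    # round to the output precision
```

Sizing C_FRAC and T_FRAC against one shared budget, instead of giving each stage its own worst-case allowance, is what shrinks the tables and the multiplier.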
7

Implementation of Elementary Functions for a Fixed Point SIMD DSP Coprocessor

Tomasson, Orri January 2010 (has links)
This thesis is about implementing the reciprocal, square root, inverse square root, and logarithm functions on a DSP platform. A multi-core DSP platform consisting of one master processor core and several SIMD coprocessor cores is currently being designed by a team at the Computer Engineering Department of Linköping University. The SIMD coprocessors' arithmetic logic unit (ALU) has 16 multipliers to support vector multiplication instructions; by using all 16 multipliers efficiently, polynomials can be evaluated very fast. The ALU has no hardware support for floating-point arithmetic, so the challenge is to obtain good precision using fixed-point arithmetic. Precise and fast implementations of the mathematical functions are obtained by converting the fixed-point input to a soft floating-point format before polynomial approximation, choosing a polynomial based on an error analysis of the polynomial approximation, and using Newton-Raphson or Goldschmidt iterations to improve the precision of the polynomial approximations. Finally, changes and additions to the instruction set architecture are suggested to make the implementations faster by using the existing hardware more efficiently.
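The described flow (normalize to a soft float, approximate, refine, denormalize) can be modelled in software as follows. This is a sketch of the general technique, not the thesis code; the Q16.16 format, the classic 48/17 seed, and the two iterations are illustrative assumptions.

```python
def recip_q16_16(x):
    """Reciprocal of an unsigned Q16.16 value x > 0, returned in Q16.16."""
    # 1. Soft-float normalization: the value of x is M * 2^(e-15),
    #    with the mantissa M in [0.5, 1) held as a Q1.31 integer.
    e = x.bit_length() - 1
    m = x << (30 - e) if e <= 30 else x >> (e - 30)
    # 2. Degree-1 seed y ~= 48/17 - (32/17)*M in Q2.30 (max rel. error 1/17).
    y = int(48 / 17 * (1 << 30)) - ((int(32 / 17 * (1 << 30)) * m) >> 31)
    # 3. Two Newton-Raphson steps, y <- y*(2 - M*y), each doubling the accuracy.
    for _ in range(2):
        y = (y * ((2 << 30) - ((m * y) >> 31))) >> 30
    # 4. Denormalize: 1/x = (1/M) * 2^(15-e), rescaled from Q2.30 to Q16.16.
    return y >> (e - 1) if e >= 1 else y << (1 - e)

print(recip_q16_16(3 << 16) / 65536.0)  # ~0.33333 (i.e. 1/3)
```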
8

Speeding Up and Quantifying Approximation Error in Continuum Quantum Monte Carlo Solid-State Calculations

Parker, William David 01 November 2010 (has links)
No description available.
9

Time-domain Response of Linear Hysteretic Systems to Deterministic and Random Excitations.

Muscolino, G., Palmeri, Alessandro, Ricciardelli, F. January 2005 (has links)
The causal and physically realizable Biot hysteretic model proves to be the simplest linear model able to describe the nearly rate-independent behaviour of engineering materials. In this paper, the performance of the Biot hysteretic model is analysed and compared with that of the ideal and causal hysteretic models. The Laguerre polynomial approximation (LPA) method, recently proposed for the time-domain analysis of linear viscoelastic systems, is then summarized and applied to predicting the dynamic response of linear hysteretic systems to deterministic and random excitations. The parameters of the LPA model generally have to be computed through numerical integration; however, when the model is used to approximate the Biot hysteretic model, closed-form expressions can be found. Effective step-by-step procedures are also provided, which prove accurate even for high levels of damping. Finally, the method is applied to the dynamic analysis of a highway embankment excited by deterministic and random ground motions. The results show that in some cases the inaccuracy associated with the use of an equivalent viscous damping model is too large.
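For orientation, the systems considered are single-degree-of-freedom (or modal) equations in which the damping enters through a convolution with a hysteretic kernel $h(t)$, written here in a generic textbook form rather than the paper's own notation:

$$m\,\ddot{u}(t) + k\,u(t) + \int_0^t h(t-\tau)\,\dot{u}(\tau)\,\mathrm{d}\tau = f(t)$$

The ideal hysteretic model makes the corresponding loss factor independent of frequency at the price of causality; Biot's kernel is the simplest causal alternative, which is why it serves as the reference model above.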
10

High performance computing for the discontinuous Galerkin methods

Mukhamedov, Farukh January 2018 (has links)
Discontinuous Galerkin methods form a class of numerical methods for solving partial differential equations by combining features of finite element and finite volume methods. The methods are defined using a weak form of a particular model problem, allowing for discontinuities in the discrete trial and test spaces. Using a discontinuous discrete space provides flexibility and a compact discretisation pattern, allowing multidomain and multiphysics simulation. Discontinuous Galerkin methods with a higher polynomial approximation order, the so-called p-version, perform better in terms of convergence rate than the low-order h-version with smaller element sizes and a bigger mesh. However, the condition number of the Galerkin system grows accordingly. This causes a surge in the required storage, the computational complexity, and the time required for computation. We use the following three approaches to keep the advantages and eliminate the disadvantages.

The first approach is a specific choice of basis functions, which we call C1 polynomials. These ensure that the majority of the integrals over the edges of the mesh elements vanish, which reduces the total number of non-zero elements in the resulting system and decreases the computational complexity without loss of precision. This approach does not affect the number of iterations required by the chosen Conjugate Gradient method compared to other choices of basis functions; it actually decreases the total number of algebraic operations performed.

The second approach is the introduction of suitable preconditioners; in our case, the additive two-layer Schwarz method, developed in [4], for the iterative Conjugate Gradient method is considered. This directly affects the spectral condition number of the system matrix and decreases the number of iterations required for the computation. This approach, however, increases the total number of algebraic operations and might require more operational time. To tackle the rise in the number of algebraic operations, we introduce a modified additive two-layer non-overlapping Schwarz method with a multigrid process, which uses a fixed low-order approximation polynomial degree on a coarse grid. We show that this approach is spectrally equivalent to the first preconditioner and requires less computation time.

The third approach is the development of an efficient mathematical framework for a distributed data structure, which allows a high-performance, massively parallel implementation of the discontinuous Galerkin method. We demonstrate that it is possible to exploit properties of the system matrix and of the C1 basis polynomials to optimize the parallel structures. This parallel data structure allows us to parallelize, at the same time, both the matrix-vector multiplication routines for the Conjugate Gradient method and the preconditioner routines at the solver level, minimizing the transfer ratio within the distributed system. Finally, we combine all three approaches into a framework which allowed us to successfully implement all of the above.
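For reference, both preconditioning approaches plug into the standard preconditioned Conjugate Gradient iteration. A minimal dense-matrix sketch follows, with apply_prec standing in for, e.g., the two-level Schwarz preconditioner; this is the textbook algorithm, not the thesis's distributed implementation.

```python
import numpy as np

def pcg(A, b, apply_prec, tol=1e-8, max_iter=1000):
    """Preconditioned Conjugate Gradients for a symmetric positive-definite A.
    apply_prec(r) returns M^{-1} r for the chosen preconditioner M."""
    x = np.zeros_like(b)
    r = b - A @ x                     # initial residual
    z = apply_prec(r)
    p = z.copy()                      # initial search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)         # step length along p
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:   # converged on the residual norm
            break
        z = apply_prec(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p     # next A-conjugate search direction
        rz = rz_new
    return x

# Example with the trivial (identity) preconditioner:
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(pcg(A, b, lambda r: r))  # ~ [0.0909, 0.6364]
```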
