1.
Simulink™ modules that emulate digital controllers realized with fixed-point or floating-point arithmetic. Robe, Edward D. January 1994.
No description available.
2.
Análise do efeito da precisão finita no algoritmo adaptativo sigmoidal / Analysis of the effect of finite precision on the sigmoidal adaptive algorithm. Fonseca, José de Ribamar Silva. 16 February 2017.
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) / Adaptive filtering is currently an important tool in statistical signal processing, especially when signals from environments with unknown, time-varying statistics must be processed. The study of adaptive filtering was driven by the development of the Least Mean Square (LMS) algorithm in 1960. Since then, other adaptive algorithms have emerged with better performance than the LMS algorithm with respect to misadjustment and convergence rate. Among them is the Sigmoidal algorithm (SA), which outperformed the LMS in convergence rate and misadjustment in its infinite-precision implementations. In hardware devices such as DSPs, microcontrollers and FPGAs, adaptive algorithms are implemented in finite precision, generally with fixed-point arithmetic. When adaptive filters are implemented in finite precision, several effects can degrade their performance and, ultimately, lead to divergence, because of the quantization errors introduced when the variables involved in the adaptive processing are approximated from their original values. This work therefore analyzes the performance of the Sigmoidal adaptive algorithm (SA) in finite precision when implemented with fixed-point arithmetic, in particular its performance curve and misadjustment, compared across different word lengths (numbers of bits). The work proposes a Taylor series approximation of the gradient of the SA cost function ln(cosh αe) for implementation in finite precision and analyzes its performance curve for different word lengths. It is shown that the algorithm remains stable in its convergence for different word lengths, and that the increase in the steady-state misadjustment level is sensitive to the quantization of the variables involved in the algorithm's computations.
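As an illustration of the kind of computation such a fixed-point implementation involves, the sketch below (C, Q15 arithmetic) updates an adaptive filter using a truncated Taylor approximation of the tanh gradient of ln(cosh αe). It is a generic, hypothetical example: the word lengths, scaling, rounding mode and function names are assumptions, not the implementation studied in the thesis.

    #include <stdint.h>

    /* Illustrative Q15 fixed-point helpers (not from the thesis). */
    typedef int16_t q15_t;

    static q15_t q15_mul(q15_t a, q15_t b)        /* (a*b) >> 15 with rounding */
    {
        int32_t p = (int32_t)a * (int32_t)b + (1 << 14);
        return (q15_t)(p >> 15);
    }

    static q15_t q15_sat(int32_t v)               /* saturate to the Q15 range */
    {
        if (v >  32767) return  32767;
        if (v < -32768) return -32768;
        return (q15_t)v;
    }

    /* Truncated Taylor series tanh(u) ~ u - u^3/3, the kind of polynomial
     * gradient approximation the abstract refers to (valid for small u). */
    static q15_t q15_tanh_taylor(q15_t u)
    {
        q15_t u2 = q15_mul(u, u);
        q15_t u3 = q15_mul(u2, u);
        return q15_sat((int32_t)u - (int32_t)q15_mul(u3, 10923)); /* 10923 ~ 1/3 in Q15 */
    }

    /* One SA-style weight update: w += mu * tanh(alpha*e) * x.
     * Hypothetical signature; alpha is assumed representable in Q15 here. */
    static void sa_update(q15_t *w, const q15_t *x, int taps,
                          q15_t e, q15_t alpha, q15_t mu)
    {
        q15_t g    = q15_tanh_taylor(q15_mul(alpha, e));   /* gradient term */
        q15_t step = q15_mul(mu, g);                       /* scaled step   */
        for (int i = 0; i < taps; i++)
            w[i] = q15_sat((int32_t)w[i] + (int32_t)q15_mul(step, x[i]));
    }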
3.
Implementation of Elementary Functions for a Fixed Point SIMD DSP Coprocessor. Tomasson, Orri. January 2010.
This thesis is about implementing reciprocal, square root, inverse square root and logarithm functions on a DSP platform. A multi-core DSP platform consisting of one master processor core and several SIMD coprocessor cores is currently being designed by a team at the Computer Engineering Department of Linköping University. The SIMD coprocessors' arithmetic logic unit (ALU) has 16 multipliers to support vector multiplication instructions; by using the 16 multipliers efficiently, polynomials can be evaluated very quickly. The ALU has no hardware support for floating-point arithmetic, so the challenge is to obtain good precision using fixed-point arithmetic. Precise and fast implementations of the mathematical functions are obtained by converting the fixed-point input to a soft floating-point format before polynomial approximation, choosing the polynomial based on an error analysis of the approximation, and using Newton-Raphson or Goldschmidt iterations to improve the precision of the polynomial approximations. Finally, changes and additions to the instruction set architecture are suggested that would make the implementations faster while using the existing hardware efficiently.
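The following sketch illustrates the general recipe described in the abstract: normalize a fixed-point input as a soft floating-point step would, start from a small polynomial guess, then refine with Newton-Raphson iterations. It is a generic C example for an unsigned Q16.16 input; the formats, the classic 48/17 - 32/17*m starting guess and the function name are assumptions, not the coprocessor's actual code.

    #include <stdint.h>

    /* Reciprocal of an unsigned Q16.16 value, returned in Q16.16. */
    uint32_t recip_q16_16(uint32_t d)
    {
        if (d == 0)
            return UINT32_MAX;                      /* saturate on divide-by-zero */

        int lz = __builtin_clz(d);                  /* GCC/Clang builtin          */
        uint32_t m = d << lz;                       /* mantissa in [0.5, 1), Q0.32 */

        /* Linear initial guess x0 = 48/17 - 32/17 * m, stored in Q2.30. */
        uint32_t x = 0xB4B4B4B5u
                   - (uint32_t)(((uint64_t)0x78787879u * m) >> 32);

        /* Newton-Raphson: x <- x * (2 - m*x); each step roughly doubles
         * the number of correct bits. */
        for (int i = 0; i < 3; i++) {
            uint32_t mx = (uint32_t)(((uint64_t)m * x) >> 32);        /* Q2.30 */
            x = (uint32_t)(((uint64_t)x * (0x80000000u - mx)) >> 30);
        }

        /* Undo the normalization: 1/d = (1/m) * 2^(lz-16), rescaled to Q16.16. */
        int sh = 30 - lz;
        if (sh < 0)
            return UINT32_MAX;                      /* 1/d overflows Q16.16 */
        return x >> sh;
    }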
4.
Digital Δ-Σ Modulation: variable modulus and tonal behaviour in a fixed-point digital environment. Borkowski, M. (Maciej). 28 October 2008.
Abstract
Digital delta-sigma modulators are used in a broad range of modern electronic sub-systems, including oversampled digital-to-analogue converters, class-D amplifiers and fractional-N frequency synthesizers.
This work addresses a well-known problem of unwanted spurious tones in the modulator’s output spectrum. When a delta-sigma modulator works with a constant input, the output signal can be periodic, and short periods lead to strong deterministic tones. In this work we propose means for guaranteeing that the output period will never be shorter than a prescribed minimum value for all constant inputs. This allows a relationship to be formulated between the modulator’s bus width and the spurious-free range, thereby making it possible to trade output spectrum quality for hardware consumption.
The second problem addressed in this thesis is related to the finite accuracy of frequencies generated in delta-sigma fractional-N frequency synthesis. The synthesized frequencies are usually approximated with an accuracy that is dependent on the modulator’s bus width. We propose a solution which allows frequencies to be generated exactly and removes the problem of a constant phase drift. This solution, which is applicable to a broad range of digital delta-sigma modulator architectures, replaces the traditionally used truncation quantizer with a variable modulus quantizer. The modulus, provided by a separate input, defines the denominator of the rational output mean.
The thesis concludes with a practical example of a delta-sigma modulator used in a fractional-N frequency synthesizer designed to meet the strict accuracy requirements of a GSM base station transceiver. Here we optimize and compare a traditional modulator and a variable modulus design in order to minimize hardware consumption. The example illustrates the use made of the relationship between the spurious-free range and the modulator’s bus width, and the practical use of the variable modulus functionality.
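To make the variable-modulus idea concrete, here is a minimal first-order error-feedback modulator in C. It is an illustrative sketch only: the thesis targets higher-order digital delta-sigma architectures, and the struct and function names here are hypothetical.

    #include <stdint.h>

    /* First-order modulator with a programmable modulus. */
    typedef struct {
        uint32_t acc;      /* running quantization-error accumulator      */
        uint32_t modulus;  /* denominator M of the rational output mean   */
    } ddsm1_t;

    /* One modulator step for a constant input x (0 <= x < modulus).
     * Returns the 1-bit output; the long-run mean of the output is x/M,
     * exact rather than a 2^-n approximation. */
    static int ddsm1_step(ddsm1_t *m, uint32_t x)
    {
        m->acc += x;
        if (m->acc >= m->modulus) {     /* variable-modulus quantizer */
            m->acc -= m->modulus;       /* feed the error back        */
            return 1;
        }
        return 0;
    }

With modulus = 10 and a constant input x = 3, for example, the 1-bit output has a long-run mean of exactly 3/10, a value that a power-of-two truncation quantizer can only approximate.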
5.
Fixed-Point Image Orthorectification Algorithms for Reduced Computational Cost. French, Joseph Clinton. 17 May 2016.
No description available.
6.
Synthesis of certified programs in fixed-point arithmetic, and its application to linear algebra basic blocks. Najahi, Mohamed amine. 10 December 2014.
To be cost-effective, embedded systems are shipped with low-end microprocessors. These processors are dedicated to one or a few tasks that are highly demanding on computational resources; examples of widely deployed tasks include the fast Fourier transform, convolutions, and digital filters. For these tasks to run efficiently, embedded systems programmers favor fixed-point arithmetic over the standardized but costly floating-point arithmetic. However, they face two difficulties. First, writing fixed-point code is tedious and requires the programmer to manage every arithmetical detail. Second, because of the low dynamic range of fixed-point numbers compared to floating-point numbers, there is a persistent belief that fixed-point computations are inherently inaccurate. The first part of this thesis addresses these two limitations: it shows how to design and implement tools that automatically synthesize fixed-point programs, and, to strengthen the user's confidence in the synthesized code, it suggests analytic methods to generate certificates. These certificates can be checked using a formal verification tool and assert that the rounding errors of the generated code are indeed below a given threshold. The second part of the thesis studies the trade-offs involved when generating fixed-point code for linear algebra basic blocks, with experimental data on fixed-point synthesis for matrix multiplication and matrix inversion through Cholesky decomposition.
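The toy C sketch below shows the kind of bookkeeping such a certificate rests on: each fixed-point value carries a bound on its accumulated rounding error, and the bound is propagated through a truncated multiplication. The struct, Q-formats and bound expression are illustrative assumptions, not the thesis's tool or its certificate format.

    #include <stdint.h>

    /* A value in Qm.f together with a proven error bound. */
    typedef struct {
        int32_t v;        /* value in Qm.f                                */
        int     f;        /* number of fractional bits                    */
        double  err;      /* bound on |true value - v * 2^-f|             */
    } fx_t;

    /* Multiply two certified values, truncating the product back to 'fout'
     * fractional bits (assumes fout <= a.f + b.f and an arithmetic right
     * shift; overflow of the integer part is ignored in this sketch).
     * Truncation adds at most one ulp = 2^-fout, and the input errors
     * propagate through the product term by term. */
    static fx_t fx_mul(fx_t a, fx_t b, int fout)
    {
        fx_t r;
        int64_t p = (int64_t)a.v * (int64_t)b.v;      /* exact, f = a.f + b.f */
        r.f = fout;
        r.v = (int32_t)(p >> (a.f + b.f - fout));     /* truncation step      */

        double xa = a.v / (double)(1u << a.f);        /* computed magnitudes  */
        double xb = b.v / (double)(1u << b.f);        /* used in the bound    */
        r.err = a.err * (xb < 0 ? -xb : xb)
              + b.err * (xa < 0 ? -xa : xa)
              + a.err * b.err
              + 1.0 / (double)(1u << fout);           /* one ulp of truncation */
        return r;
    }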
7.
Methods to evaluate accuracy-energy trade-off in operator-level approximate computing / Méthodes d'évaluation du compromis précision-énergie pour le calcul approximatif niveau opérateur. Barrois, Benjamin. 11 December 2017.
With the physical limits of silicon-based computing being reached, new ways have to be found to overcome the predicted end of Moore's law. Many applications can tolerate approximations in their computations at several levels without degrading the quality of their output, or while degrading it in an acceptable way. This thesis focuses on approximate arithmetic architectures to seize this opportunity. First, a critical study of state-of-the-art approximate adders and multipliers is presented. Then, a model for fixed-point error propagation leveraging power spectral density is proposed, followed by a model for the bitwise error-rate propagation of approximate operators. Approximate operators are then used to reproduce voltage-overscaling effects in exact arithmetic operators. Using our open-source framework ApxPerf and its synthesizable template-based C++ libraries apx_fixed for approximate operators and ct_float for low-power floating-point arithmetic, two consecutive studies are carried out on complex signal processing applications. First, approximate operators are compared to fixed-point arithmetic, and the superiority of fixed-point is highlighted. Second, fixed-point is compared to small-width floating-point under equivalent conditions; depending on the application conditions, floating-point shows an unexpected competitiveness compared to fixed-point. The results and discussions of this thesis take a fresh look at approximate arithmetic and suggest new directions for the future of energy-efficient architectures.
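As a concrete example of the class of operators studied, here is one representative approximate adder style from the literature, a lower-part-OR design in which the k least significant bits are combined with a bitwise OR so that no carry propagates through them. This C sketch is a generic illustration of that idea, not one of the specific designs evaluated in the thesis.

    #include <stdint.h>

    /* Approximate 32-bit addition: exact on the upper bits, carry-free
     * OR on the k lower bits. Larger k saves more energy in hardware at
     * the cost of a larger worst-case error. */
    static uint32_t approx_add(uint32_t a, uint32_t b, unsigned k)
    {
        uint32_t low_mask = (k >= 32) ? 0xFFFFFFFFu : ((1u << k) - 1u);
        uint32_t low  = (a | b) & low_mask;                 /* cheap, carry-free */
        uint32_t high = (a & ~low_mask) + (b & ~low_mask);  /* exact upper part  */
        return high | low;
    }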
8.
Évaluation analytique de la précision des systèmes en virgule fixe pour des applications de communication numérique / Analytical approach for evaluation of the fixed point accuracy. Chakhari, Aymen. 07 October 2014.
Traditionally, accuracy evaluation is performed through two different approaches. The first is to simulate the fixed-point implementation in order to assess its performance; such simulation-based approaches require large computing capacity and lead to prohibitive evaluation times. To avoid this problem, the work in this thesis focuses on evaluating accuracy through analytical models. These models describe the behavior of the system through analytical expressions that evaluate a defined precision metric. Several analytical models have previously been proposed to evaluate the fixed-point accuracy of linear time-invariant (LTI) systems and of non-LTI, non-recursive and recursive linear systems. The objective of this thesis is to propose analytical models for evaluating the accuracy, in terms of quantization noise, of digital communication systems and digital signal processing algorithms built from non-smooth and non-linear operators. In a first step, analytical models are provided for evaluating the accuracy of decision operators and of their iterations and cascades. In a second step, data word lengths are optimized for a fixed-point hardware implementation of the Decision Feedback Equalizer (DFE), based on the proposed analytical models, and for iterative decoding algorithms such as turbo decoding and LDPC (Low-Density Parity-Check) decoding under a particular quantization law. The first aspect of this work concerns analytical models for evaluating the accuracy of non-smooth decision operators and of cascades of decision operators; the characterization of quantization error propagation through a cascade of decision operators is the basis of the proposed models. These models are then applied to evaluate the accuracy of the SSFE (Selective Spanning with Fast Enumeration) sphere decoding algorithm used in MIMO (Multiple-Input Multiple-Output) transmission systems. Next, the accuracy evaluation of iterative structures of decision operators is addressed. Quantization errors caused by the use of fixed-point arithmetic are characterized in order to derive analytical models for evaluating the accuracy of digital signal processing applications that include iterative decision structures. A second approach, based on estimating an upper bound on the decision error probability in the convergence mode, is proposed to reduce the evaluation time. These models are applied to the problem of evaluating the fixed-point specification of the DFE. Resource and power consumption estimates on the FPGA are then obtained using the Xilinx tools in order to make an appropriate choice of data widths, aiming at an accuracy/cost trade-off. The last step of this work concerns the fixed-point modeling of iterative decoding algorithms. A model of the turbo decoding algorithm and of LDPC decoding is given. This approach takes into account the particular structure of these algorithms, which implies that the quantities computed within the decoder (as well as the operations) are quantized following an iterative approach. Furthermore, the fixed-point representation used differs from the conventional representation based on the numbers of bits assigned to the integer and fractional parts: the proposed approach is based on the dynamic range and the total number of bits. This choice of dynamic range gives more flexibility to the fixed-point models, since the dynamic range is no longer limited to a power of two. Finally, memory size is reduced through saturation and truncation techniques in order to target low-complexity architectures.
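For orientation, the C fragment below states the textbook additive-noise model that analytical accuracy evaluation of this kind usually builds on: rounding to a step q = 2^-f is modelled as uniform noise of variance q^2/12, and the probability that the accumulated noise flips a sign decision of margin d is bounded with a Gaussian tail. The formulas and function names are standard assumptions used for illustration, not the specific models derived in the thesis.

    #include <math.h>

    /* Variance contributed by one rounding to 'fbits' fractional bits:
     * error uniform on [-q/2, q/2], so variance q^2 / 12. */
    static double quant_noise_var(int fbits)
    {
        double q = ldexp(1.0, -fbits);      /* q = 2^-fbits */
        return q * q / 12.0;
    }

    /* Estimate of a sign-decision error: probability that a zero-mean
     * Gaussian of variance 'var' exceeds the distance 'd' between the
     * infinite-precision value and the decision threshold. */
    static double decision_error_prob(double d, double var)
    {
        if (var <= 0.0)
            return 0.0;
        return 0.5 * erfc(fabs(d) / sqrt(2.0 * var));
    }

Summing quant_noise_var over the independent rounding points of a datapath gives the variance fed to decision_error_prob, which is the sort of closed-form estimate that replaces bit-true simulation.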
9.
Harnessing resilience: biased voltage overscaling for probabilistic signal processing. George, Jason. 26 October 2011.
A central component of modern computing is the idea that computation requires determinism. Contrary to this belief, the primary contribution of this work shows that useful computation can be accomplished in an error-prone fashion. Focusing on low-power computing and the increasing push toward energy conservation, the work seeks to sacrifice accuracy in exchange for energy savings.

Probabilistic computing forms the basis for this error-prone computation by diverging from the requirement of determinism and allowing for randomness within computing. Implemented as probabilistic CMOS (PCMOS), the approach realizes enormous energy savings in applications that require probability at an algorithmic level. Extending probabilistic computing to applications that are inherently deterministic, the biased voltage overscaling (BIVOS) technique presented here constrains the randomness introduced through PCMOS. In doing so, BIVOS limits the magnitude of any resulting deviations and realizes energy savings with minimal impact on application quality.

Implemented for a ripple-carry adder, an array multiplier, and a finite-impulse-response (FIR) filter, a BIVOS solution substantially reduces energy consumption and does so with improved error rates compared to an energy-equivalent reduced-precision solution. When applied to H.264 video decoding, a BIVOS solution achieves a 33.9% reduction in energy consumption while maintaining a peak signal-to-noise ratio of 35.0 dB (compared to 14.3 dB for a comparable reduced-precision solution).

While the work presented here focuses on a specific technology, the technique realized through BIVOS has far broader implications. It is the departure from the conventional mindset that useful computation requires determinism that represents the primary innovation of this work. With applicability to emerging and yet-to-be-discovered technologies, BIVOS has the potential to contribute to computing in a variety of fashions.
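A behavioural sketch of the idea, in C: a ripple-carry adder in which each bit position fails with its own probability, so an aggressively under-volted low-order bit errs often while the well-powered high-order bits almost never do. The error model (an independent flip of the sum bit only) and all names are simplifying assumptions for illustration, not the circuit-level model used in the thesis.

    #include <stdint.h>
    #include <stdlib.h>

    #define NBITS 16

    /* p_err[i] is the probability that bit position i produces a wrong
     * sum bit; a BIVOS-style assignment makes p_err[0] largest and
     * p_err[NBITS-1] smallest. Only the sum bit is perturbed here. */
    static uint32_t bivos_rca(uint16_t a, uint16_t b, const double p_err[NBITS])
    {
        uint32_t sum = 0;
        unsigned carry = 0;
        for (int i = 0; i < NBITS; i++) {
            unsigned ai = (a >> i) & 1u, bi = (b >> i) & 1u;
            unsigned s  = ai ^ bi ^ carry;
            carry       = (ai & bi) | (carry & (ai ^ bi));
            if ((double)rand() / RAND_MAX < p_err[i])
                s ^= 1u;                       /* voltage-induced bit flip */
            sum |= (uint32_t)s << i;
        }
        return sum | ((uint32_t)carry << NBITS);
    }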
10.
Parallel gaming-related algorithms for an embedded media processor. Tolunay, John. January 2012.
A new type of computing architecture called ePUMA is under development by the ePUMA Research Team at the Department of Electrical Engineering at Linköping University. It contains several single instruction multiple data (SIMD) cores, called SIMD Units, in which up to 64 computations can be done in parallel. The goal of the architecture is to create a low-power chip with good performance for embedded applications. One possible application is video games. In this work we have studied a selected set of video-game-related algorithms, including a pseudo-random number generator, clipping, and rasterization with fragment processing, analyzing how well they fit the ePUMA platform.
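As an example of the kind of algorithm that maps well onto such a machine, the C sketch below keeps one xorshift32 state per lane so that all 16 lanes advance with the same instruction sequence, which is the property a SIMD unit needs. The lane count, seeding and names are assumptions used for illustration; this is not ePUMA code.

    #include <stdint.h>

    #define LANES 16

    /* One independent xorshift32 state per lane; seeds must be nonzero. */
    typedef struct { uint32_t s[LANES]; } simd_rng_t;

    static void simd_rng_next(simd_rng_t *r, uint32_t out[LANES])
    {
        for (int i = 0; i < LANES; i++) {   /* one xorshift32 step per lane */
            uint32_t x = r->s[i];
            x ^= x << 13;
            x ^= x >> 17;
            x ^= x << 5;
            r->s[i] = x;
            out[i]  = x;
        }
    }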