691 |
DIGITAL GAIN ERROR CORRECTION TECHNIQUE FOR 8-BIT PIPELINE ADC. Javeed, Khalid. January 2010.
An analog-to-digital converter (ADC) is a link between the analog and digital domains and plays a vital role in modern mixed-signal processing systems. There are several architectures, for example flash ADCs, pipeline ADCs, sigma-delta ADCs, successive approximation (SAR) ADCs and time-interleaved ADCs. Among these, the pipeline ADC offers a favorable trade-off between speed, power consumption, resolution, and design effort. Common applications of pipeline ADCs include high-quality video systems, radio base stations, Ethernet, cable modems and high-performance digital communication systems. Unfortunately, static errors such as comparator offset errors, capacitor mismatch errors and gain errors degrade the performance of the pipeline ADC. Hence, there is a need for accuracy-enhancement techniques. The conventional way to overcome these errors is to calibrate the pipeline ADC after fabrication, using so-called post-fabrication calibration techniques. However, environmental changes such as temperature variations and device aging necessitate recalibration at regular intervals, resulting in a loss of time and money. A lot of effort can be saved if the digital outputs of the pipeline ADC can be used for the estimation and correction of these errors; such methods are further classified as foreground and background techniques. In this thesis work, an algorithm is proposed that can estimate 10% inter-stage gain errors in a pipeline ADC without any need for a special calibration signal. The efficiency of the proposed algorithm is investigated on an 8-bit pipeline ADC architecture. The first seven stages are implemented using the 1.5-bit/stage architecture while the last stage is a one-bit flash ADC. The ADC and the error correction algorithm are simulated in Matlab, and the signal-to-noise-and-distortion ratio (SNDR) is calculated to evaluate the algorithm's efficiency.
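As a rough illustration of the architecture described above, the following Python sketch (the thesis itself uses Matlab) models an ideal 1.5-bit/stage pipeline with a 1-bit final flash, injects random inter-stage gain errors of up to 10%, and compares the SNDR of the digital reconstruction with and without gain correction. The reference voltage, comparator thresholds, test-tone amplitude and reconstruction scheme are illustrative assumptions, not the thesis's implementation; the "corrected" path simply reuses the true gains, standing in for gains estimated by the proposed algorithm.

import numpy as np

VREF = 1.0
N_STAGES = 7                      # 1.5-bit stages; a final 1-bit flash resolves the last residue

def stage_15bit(vin, gain):
    """One 1.5-bit stage: 3-level sub-ADC (thresholds at +/-VREF/4) plus residue amplifier."""
    d = np.where(vin > VREF / 4, 1, np.where(vin < -VREF / 4, -1, 0))
    return d, gain * vin - d * VREF          # ideal inter-stage gain is 2

def pipeline_adc(vin, gains):
    codes, v = [], vin
    for g in gains:
        d, v = stage_15bit(v, g)
        codes.append(d)
    codes.append(np.where(v > 0, 1, -1))     # final 1-bit flash
    return codes

def reconstruct(codes, gain_est):
    """Digital back-end: invert each stage transfer function using the *estimated* gains."""
    est = codes[-1] * VREF / 2.0             # coarse estimate of the last residue
    for d, g in zip(reversed(codes[:-1]), reversed(gain_est)):
        est = (est + d * VREF) / g
    return est

rng = np.random.default_rng(0)
t = np.arange(4096)
vin = 0.7 * VREF * np.sin(2 * np.pi * 101 * t / 4096)       # amplitude kept below full scale
true_gains = 2.0 * (1 + rng.uniform(-0.1, 0.1, N_STAGES))   # up to 10% inter-stage gain error

codes = pipeline_adc(vin, true_gains)
uncorrected = reconstruct(codes, [2.0] * N_STAGES)          # back-end assumes ideal gains of 2
corrected = reconstruct(codes, true_gains)                  # back-end uses (perfectly) estimated gains

def sndr_db(ref, out):
    err = out - ref
    return 10 * np.log10(np.sum(ref ** 2) / np.sum(err ** 2))

print(f"SNDR without correction: {sndr_db(vin, uncorrected):.1f} dB")
print(f"SNDR with gain correction: {sndr_db(vin, corrected):.1f} dB")

In this toy model, the corrected output is limited only by the roughly 8-bit quantization, while reconstruction with nominal gains of 2 degrades markedly; closing that gap without a calibration signal is what the proposed estimation algorithm is for.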
|
692 |
On Generating Complex Numbers for FFT and NCO Using the CORDIC Algorithm / Att generera komplexa tal för FFT och NCO med CORDIC-algoritmen. Andersson, Anton. January 2008.
This report has been compiled to document the thesis work carried out by Anton Andersson for Coresonic AB. The task was to develop an accelerator that could generate complex numbers suitable for fast Fourier transforms (FFT) and for tuning the phase of complex signals (NCO). Of the many ways to achieve this, the CORDIC algorithm was chosen. It is very well suited since the basic implementation allows rotation of 2D vectors using only shift and add operations. Error bounds and a proof of convergence are derived carefully. The accelerator was implemented in VHDL in such a way that all critical parameters were easy to change. Performance measures were extracted by simulating realistic test cases and then comparing the output with reference data precomputed with high precision. Hardware costs were estimated by synthesizing a set of different configurations. Utilizing graphs of performance versus cost makes it possible to choose an optimal configuration. Maximum errors were extracted from the simulations and seemed rather large for some configurations. The maximum error distribution was then plotted in histograms, revealing that the typical error is often much smaller than the largest one. Even after trouble-shooting, the errors still seem to be somewhat larger than what other implementations of CORDIC achieve. However, the precision was concluded to be sufficient for the targeted applications. / This report documents the thesis work carried out by Anton Andersson for Coresonic AB. The task was to develop an accelerator that can generate complex numbers suitable for fast Fourier transforms (FFT) and for phase rotation of complex signals (NCO). There are many ways to do this, but the choice fell on an algorithm called CORDIC. It is very well suited since it can rotate 2D vectors by an arbitrary angle using simple operations such as bit shifts and additions. Error bounds and convergence are derived carefully. The accelerator was implemented in VHDL with the goal that critical parameters should be easy to change. The model was then simulated in realistic test cases and the results were compared with reference data precomputed with very high precision. In addition, a number of different configurations were synthesized so that performance can easily be weighed against cost. From the coefficients obtained in the simulations, the largest error was computed for a number of different configurations. The errors initially seemed abnormally large, which called for further investigation. All errors from one configuration were plotted as a histogram, which showed that the typical error is usually much smaller than the largest one. Even after troubleshooting, the accelerator seems to generate numbers with somewhat larger errors than other CORDIC implementations. The precision is nevertheless considered sufficient for the intended applications.
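As a sketch of the underlying algorithm, the following Python snippet implements a floating-point rotation-mode CORDIC and uses it to approximate FFT twiddle factors exp(-j*2*pi*k/n); an NCO phase rotation is the same operation applied to an arbitrary input vector. A hardware implementation would use fixed-point shift-and-add arithmetic; the iteration count, the argument-folding scheme and the function names here are illustrative assumptions, not the accelerator's actual design.

import math

N_ITER = 16                                   # micro-rotations; more iterations give more bits of accuracy
ANGLES = [math.atan(2.0 ** -i) for i in range(N_ITER)]
GAIN = 1.0
for i in range(N_ITER):
    GAIN *= math.sqrt(1.0 + 2.0 ** (-2 * i))  # constant CORDIC gain (about 1.6468)

def cordic_rotate(x, y, angle):
    """Rotate (x, y) by `angle` radians (|angle| <= ~1.74) using only shift-and-add style steps."""
    z = angle
    for i in range(N_ITER):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return x, y                               # result is scaled by GAIN

def twiddle(k, n):
    """Approximate the FFT twiddle factor exp(-j*2*pi*k/n)."""
    theta = -2.0 * math.pi * (k % n) / n
    if theta < -math.pi:
        theta += 2.0 * math.pi                # reduce to (-pi, pi]
    flip = abs(theta) > math.pi / 2
    if flip:                                  # fold into the CORDIC convergence range
        theta -= math.copysign(math.pi, theta)
    c, s = cordic_rotate(1.0 / GAIN, 0.0, theta)   # pre-scale to cancel the CORDIC gain
    return complex(-c, -s) if flip else complex(c, s)

# quick check against the closed-form values
for k in range(8):
    ref = complex(math.cos(-2 * math.pi * k / 8), math.sin(-2 * math.pi * k / 8))
    assert abs(twiddle(k, 8) - ref) < 1e-4

Pre-scaling the unit vector by 1/GAIN removes the constant CORDIC gain up front, so no final multiplication is needed; each iteration then costs only shifts, additions and a lookup of atan(2^-i).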
|
693 |
Comparative Advantage, Exports and Economic Growth. Riaz, Bushra. January 2011.
This study investigates the causal relationship among comparative advantage, exports and economic growth using annual time series data for the period 1980-2009 on 13 developing countries. The purpose is to develop an understanding of the causal relationship and to explore the differences or similarities among sectors of several developing countries that are in different stages of development, and how their comparative advantage influences exports, which in turn affect the economic growth of the country. Co-integration and vector error correction techniques are used to explore the causal relationship among the three variables. The results suggest a bi-directional, or mutual, long-run relationship between comparative advantage, exports and economic growth in most of the developing countries. The overall long-run results favour the export-led growth hypothesis that exports precede growth for all countries except Malaysia, Pakistan and Sri Lanka. In the short run, a mutual relationship mostly exists among the three variables, except between Malaysia's exports and growth, between its comparative advantage and GDP, and between Singapore's exports and growth. The short-run causality runs from exports to gross domestic product (GDP). So overall, the short-run results favour export-led growth in all cases except Malaysia, Nepal and Sri Lanka.
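As a simplified illustration of the kind of analysis described above, the sketch below computes the Balassa revealed comparative advantage index and runs a two-step Engle-Granger style error-correction regression on toy series; the thesis uses co-integration tests and a full vector error correction model on real 1980-2009 data, so the variable names, toy data and single-equation setup here are assumptions made only for illustration.

import numpy as np

def rca(exports_ij, exports_i_total, world_j, world_total):
    """Balassa revealed comparative advantage: > 1 means country i is specialised in sector j."""
    return (exports_ij / exports_i_total) / (world_j / world_total)

def ols(y, X):
    """Least squares with an intercept; returns coefficients and residuals."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta

def engle_granger_ecm(log_gdp, log_exports):
    """Regress d(gdp) on d(exports) and the lagged long-run residual (error-correction term)."""
    _, u = ols(log_gdp, log_exports)                  # step 1: long-run (cointegrating) regression
    d_gdp = np.diff(log_gdp)
    d_exp = np.diff(log_exports)
    beta, _ = ols(d_gdp, np.column_stack([d_exp, u[:-1]]))
    return {"short_run": beta[1], "error_correction": beta[2]}

# toy series standing in for 1980-2009 annual data (assumed, not the thesis data)
rng = np.random.default_rng(0)
log_exports = 0.05 * np.arange(30) + np.cumsum(0.02 * rng.standard_normal(30))
log_gdp = 1.0 + 0.8 * log_exports + 0.01 * rng.standard_normal(30)
print(engle_granger_ecm(log_gdp, log_exports))

A negative and significant coefficient on the lagged long-run residual indicates adjustment back toward the long-run relationship, which is the kind of evidence read as supporting export-led growth.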
|
694 |
Equity Valuation: An examination of which investment valuation method appears to attain the closest value to the market price of a stock. Söderlund, Nathalie. January 2011.
PURPOSE- This paper empirically evaluates various parsimonious equity valuation models in order to ascertain which model best represents the value of equity and thereby manages to withstand factors causing valuation errors. The more complicated the model applied, the more underlying assumptions are needed. The trade-off investigated here is whether the benefit of using more difficult models outweighs the cost of including the extra assumptions. The empirical results are also compared with the results of previous studies examining American companies. METHOD- Six valuation models using a discounting valuation method are evaluated: the Present Value of Expected Dividends (PVED), Residual Income Valuation (RIV), Residual Income Valuation Terminal Value Constrained [RIV(TVC)], the Abnormal Earnings Growth approach (AEG), the Abnormal Earnings Growth Terminal Value Constrained approach [AEG(TVC)] and the Free Cash Flow to the Firm model (FCFF). The five latter models are all based on the first. FINDINGS- The smallest absolute valuation error in the empirical study is attained by PVED, a model requiring few underlying assumptions and inputs. Hence, for Swedish companies there are no clear benefits of applying complex models, and the trade-off of using more complex models with more assumptions is not compelling, given that the benefit does not exceed the cost. All the earnings methods are found to be superior to the FCFF model, while the constrained RIV and AEG methods produce higher valuation errors than the unconstrained versions. The superiority of the PVED model is inconsistent with previous results for American firms, in which the RIV model is preferred. One reason for the difference is the use of different accounting standards in the countries, so the companies' capital structures and the inputs used in the valuation may differ somewhat.
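To make the relationship between the dividend and residual-income approaches concrete, the following Python sketch values a stock with both PVED and RIV from the same invented forecasts; under clean-surplus accounting and a common terminal price the two produce the same number, and the practical differences studied in the thesis come from how forecasts, terminal values and accounting inputs are actually specified. All figures below are assumptions for illustration, not data from the study.

import numpy as np

r = 0.09                                               # assumed cost of equity
dividends = np.array([2.0, 2.2, 2.4, 2.6, 2.8])        # assumed 5-year dividend forecast
earnings = np.array([3.0, 3.3, 3.6, 3.9, 4.2])         # assumed earnings forecast
b0 = 20.0                                              # opening book value of equity
terminal_price = 55.0                                  # assumed exit price at the horizon

disc = (1 + r) ** -np.arange(1, len(dividends) + 1)    # discount factors for years 1..T

# clean-surplus book values: B_t = B_{t-1} + earnings_t - dividends_t
book = b0 + np.cumsum(earnings - dividends)

# PVED: discounted dividends plus the discounted terminal price
pved = np.sum(dividends * disc) + terminal_price * disc[-1]

# RIV: opening book value plus discounted residual income plus the discounted terminal premium over book
lagged_book = np.concatenate(([b0], book[:-1]))
residual_income = earnings - r * lagged_book
riv = b0 + np.sum(residual_income * disc) + (terminal_price - book[-1]) * disc[-1]

print(f"PVED = {pved:.2f}, RIV = {riv:.2f}")           # identical under clean-surplus accounting

The AEG and FCFF models follow the same discounting pattern, with abnormal earnings growth or free cash flow to the firm taking the place of dividends as the discounted stream.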
|
695 |
Hardware Accelerator for Duo-binary CTC Decoding: Algorithm Selection, HW/SW Partitioning and FPGA Implementation. Bjärmark, Joakim; Strandberg, Marco. January 2006.
Wireless communication always struggles with errors in the transmission. The digital data received from the radio channel is often erroneous due to thermal noise and fading. The error rate can be lowered by using higher transmission power or by using an effective error correcting code. Power consumption and limits for electromagnetic radiation are two of the main problems with handheld devices today, and an efficient error correcting code will lower the transmission power and therefore also the power consumption of the device. Duo-binary CTC is an improvement of the innovative turbo codes presented in 1996 by Berrou and Glavieux and is in use in many of today's standards for radio communication, e.g. IEEE 802.16 (WiMAX) and DVB-RCS. This report describes the development of a duo-binary CTC decoder and the different problems that were encountered during the process, including design issues and algorithm choices made during the design. An implementation in VHDL has been written for Altera's Stratix II S90 FPGA and a reference model has been made in Matlab. The model has been used to simulate bit error rates for different implementation alternatives and as a bit-true reference for the hardware verification. The final result is a duo-binary CTC decoder compatible with Altera's Stratix II designs and a reference model that can be used when simulating the decoder alone or the whole signal processing chain. Among the features of the hardware are that block sizes, puncture rates and the number of iterations are dynamically configured between blocks. Before synthesis it is possible to choose how many decoders will work in parallel and with how many bits the soft input will be represented. The circuit has been run at 100 MHz in the lab, which gives a throughput of around 50 Mbit/s with four decoders working in parallel. This report describes the implementation, including its development, background and future possibilities.
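The reported figures (100 MHz clock, four parallel decoders, roughly 50 Mbit/s) can be sanity-checked with a back-of-the-envelope throughput model; the block size, iteration count and cycle counts below are assumed values for illustration only, not numbers taken from the thesis.

# Rough throughput model for an iterative turbo-style decoder (illustrative numbers only).
F_CLK = 100e6                        # clock frequency from the lab run
N_DECODERS = 4                       # decoders working in parallel
N_COUPLES = 240                      # assumed block size in symbol couples (2 info bits each)
ITERATIONS = 8                       # assumed number of full decoding iterations
CYC_PER_HALF_ITER = N_COUPLES + 20   # assumed: about one cycle per trellis step plus pipeline overhead

info_bits_per_block = 2 * N_COUPLES
cycles_per_block = 2 * ITERATIONS * CYC_PER_HALF_ITER    # two constituent decoders per iteration
throughput = N_DECODERS * info_bits_per_block * F_CLK / cycles_per_block
print(f"~{throughput / 1e6:.0f} Mbit/s")                 # lands in the tens of Mbit/s, like the reported ~50

Doubling the number of parallel decoders or halving the number of iterations scales the estimate proportionally, which is the trade-off the per-block configurability exposes.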
|
696 |
On Constructing Low-Density Parity-Check Codes. Ma, Xudong. January 2007.
This thesis focuses on designing Low-Density Parity-Check (LDPC) codes for forward error correction. The target application is real-time multimedia communications over packet networks. We investigate two code design issues that are important in the target application scenarios: designing LDPC codes with low decoding latency, and constructing capacity-approaching LDPC codes with very low error probabilities.

On designing LDPC codes with low decoding latency, we present a framework for optimizing the code parameters so that decoding can be completed after only a small number of iterations. The brute-force approach to such optimization is numerically intractable, because it involves a difficult discrete optimization problem. In this thesis, we show an asymptotic approximation to the number of decoding iterations. Based on this approximation, we propose an approximate optimization framework for finding near-optimal code parameters such that the number of decoding iterations is minimized. The approximate optimization approach is numerically tractable. Numerical results confirm that the proposed optimization approach has excellent numerical properties, and that codes with excellent performance in terms of the number of decoding iterations can be obtained. Our results show that the number of decoding iterations of the codes obtained with the proposed design approach can be as small as one-fifth of that of some previously well-known codes. The numerical results also show that the proposed asymptotic approximation is generally tight even for cases that are not in the extreme limiting regime.
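For intuition about what "number of decoding iterations" means operationally, the sketch below builds a random sparse parity-check matrix and counts the iterations a simple hard-decision bit-flipping decoder needs before all checks are satisfied. This is a hedged illustration only: the thesis works with belief-propagation-style iterative decoding and optimizes the code's degree distribution so that this count is small, whereas the construction and decoder here are deliberately naive.

import numpy as np

rng = np.random.default_rng(1)

def random_ldpc(n, m, col_weight=3):
    """Toy sparse parity-check matrix: each bit participates in `col_weight` random checks."""
    H = np.zeros((m, n), dtype=np.uint8)
    for j in range(n):
        H[rng.choice(m, size=col_weight, replace=False), j] = 1
    return H

def bit_flip_decode(H, y, max_iter=50):
    """Hard-decision bit flipping; returns (estimate, number of iterations actually used)."""
    x = y.copy()
    for it in range(max_iter):
        syndrome = (H @ x) % 2
        if not syndrome.any():
            return x, it                       # all parity checks satisfied
        unsat = H.T @ syndrome                 # unsatisfied checks touching each bit
        x[unsat == unsat.max()] ^= 1           # flip the most suspicious bits
    return x, max_iter

n, m = 1000, 500
H = random_ldpc(n, m)
codeword = np.zeros(n, dtype=np.uint8)         # the all-zero word is always a codeword
noisy = codeword ^ (rng.random(n) < 0.02).astype(np.uint8)
decoded, iters = bit_flip_decode(H, noisy)
print(f"converged after {iters} iterations, residual errors: {int(decoded.sum())}")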
On constructing capacity-approaching LDPC codes with very low error probabilities, we propose a new LDPC code construction scheme based on 2-lifts. Based on stopping set distribution analysis, we propose design criteria for the resulting codes to have very low error floors. High error floors are the main problems of previously constructed capacity-approaching codes, which prevent them from achieving very low error probabilities. Numerical results confirm that codes with very low error floors can be obtained by the proposed code construction scheme and the design criteria. Compared with the codes by the previous standard construction schemes, which have error floors at levels of 10^-3 to 10^-4, the codes by the proposed approach do not have observable error floors at levels higher than 10^-7. The error floors of the codes by the proposed approach are also significantly lower compared with the codes by the previous approaches to constructing codes with low error floors.
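The 2-lift itself is a simple graph operation: every edge of the base Tanner graph is replaced by either a parallel pair or a crossed pair of edges. The sketch below doubles a small parity-check matrix this way, choosing the per-edge permutation at random; the thesis instead chooses these permutations according to design criteria derived from the stopping-set distribution analysis, so the random choice and the toy base matrix here are placeholders only.

import numpy as np

rng = np.random.default_rng(2)

def two_lift(H_base):
    """2-lift of a parity-check matrix: each 1-entry (edge) becomes either an identity
    or a swapped 2x2 permutation block, chosen at random in this sketch."""
    m, n = H_base.shape
    I2 = np.eye(2, dtype=np.uint8)
    swap = np.array([[0, 1], [1, 0]], dtype=np.uint8)
    H = np.zeros((2 * m, 2 * n), dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            if H_base[i, j]:
                H[2*i:2*i+2, 2*j:2*j+2] = I2 if rng.random() < 0.5 else swap
    return H

H_base = np.array([[1, 1, 0, 1, 0, 0],
                   [0, 1, 1, 0, 1, 0],
                   [1, 0, 0, 0, 1, 1],
                   [0, 0, 1, 1, 0, 1]], dtype=np.uint8)
H_lifted = two_lift(H_base)
print(H_base.shape, "->", H_lifted.shape)      # (4, 6) -> (8, 12): doubled length, same degree profile

Because each 1-entry becomes a 2x2 permutation block, the lifted code keeps the base code's degree profile (and hence its threshold behaviour) while the cycle and stopping-set structure can be improved by choosing the permutations carefully.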
|
697 |
Residual-Based Isotropic and Anisotropic Mesh Adaptation for Computational Fluid Dynamics. Baserinia, Amir Reza. January 2008.
The accuracy of a fluid flow simulation depends not only on the numerical method used for discretizing the governing equations, but also on the distribution and topology of the mesh elements. Mesh adaptation is a technique for automatically modifying the mesh to improve the simulation accuracy while reducing the manual work required for mesh generation. The conventional approach to mesh adaptation is based on a feature-based criterion that identifies distinctive features in the flow field such as shock waves and boundary layers. Although this approach has proved to be simple and effective in many CFD applications, its implementation may require a lot of trial and error to determine an appropriate criterion in certain applications. An alternative is the residual-based approach, in which the discretization error of the fluid flow quantities across the mesh faces is used to construct an adaptation criterion. Although this approach provides a general framework for developing robust mesh adaptation criteria, it incurs significant computational overhead.
The main objective of the thesis is to present a methodology for developing an appropriate mesh adaptation criterion for fluid flow problems that offers the simplicity of a feature-based criterion and the robustness of a residual-based criterion. This methodology is demonstrated in the context of a second-order accurate cell-centred finite volume method for simulating laminar steady incompressible flows of constant-property fluids. In this methodology, the errors of the mass and momentum flows across the faces of each control volume are estimated with a Taylor series analysis. These face flow errors are then used to construct the desired adaptation criteria for triangular isotropic meshes and quadrilateral anisotropic meshes. The adaptation results for the lid-driven cavity flow show that the solution error on the resulting adapted meshes is 80 to 90 percent lower than that of a uniform mesh with the same number of control volumes.
The advantage of the proposed mesh adaptation method is the capability to produce meshes that lead to more accurate solutions compared to those of the conventional methods with approximately the same amount of computational effort.
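A one-dimensional caricature of the residual-based idea is sketched below: the leading Taylor-series error term of a second-order scheme, h^2 |u''|, plays the role of the face flow error, and the cell with the largest indicator is repeatedly bisected. The model field, the indicator and the refinement loop are illustrative assumptions only; the thesis estimates the mass and momentum flow errors of a cell-centred finite volume scheme and drives both isotropic and anisotropic adaptation with them.

import numpy as np

def u(x):
    """Model solution with a thin internal layer, standing in for a CFD field."""
    return np.tanh(50 * (x - 0.5))

def refine(x, n_splits=20):
    """Repeatedly split the cell whose estimated leading truncation error is largest."""
    for _ in range(n_splits):
        xc = 0.5 * (x[:-1] + x[1:])                        # cell centres
        h = np.diff(x)
        d2u = np.gradient(np.gradient(u(xc), xc), xc)      # crude second-derivative estimate
        indicator = h ** 2 * np.abs(d2u)                   # Taylor-series leading error term
        worst = np.argmax(indicator)
        x = np.insert(x, worst + 1, xc[worst])             # bisect the worst cell
    return x

x0 = np.linspace(0.0, 1.0, 21)                             # initial uniform mesh
x_adapted = refine(x0)
print(f"{len(x_adapted) - 1} cells, smallest h = {np.diff(x_adapted).min():.4f} near the layer")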
|
698 |
Standardization and use of colour for labelling of injectable drugs. Jeon, Hyae Won Jennifer. January 2008.
Medication errors are one of the most common causes of patient injuries in healthcare systems. Poor labelling has been identified as a contributing factor to medication errors, particularly those involving injectable drugs. Colour coding and colour differentiation are two major techniques used on labels to aid drug identification. However, neither approach has been scientifically proven to minimize the occurrence of, or harm from, medication errors. This thesis investigates, via a controlled experiment involving human users, the potential effects of different approaches to using colour on standardized labels on the task of identifying a specific drug from a storage area. Three different ways of using colour were compared: labels using only black, white and grey; labels where a unique colour scheme adopted from an existing manufacturer’s label is applied to each drug; and colour-coded labels based on the product’s strength level within the product line. The results show that people might be vulnerable to confusion from drugs that have look-alike labels and also have look-alike, sound-alike drug names. In particular, when each drug label had a fairly unique colour scheme, participants were more prone to misperceive the look-alike, sound-alike drug name as the correct drug name than when no colour was used or when colour was used on the labels with no apparent one-to-one association between the label colour and the drug identity. This could suggest a perceptual bias toward perceiving stimuli as the expected stimuli, especially when the task involved is familiar and the stimuli look similar to what is expected. Moreover, the results suggest a potential problem that may arise from standardizing existing labels if careful consideration is not given to how reduced visual variation among the labels of different products affects the way label colours are perceived and used for drug identification. The thesis concludes with recommendations for improving the existing standard for labelling of injectable drug containers and for avoiding medication errors due to labelling and packaging in general.
|
699 |
Error Detection in Number-Theoretic and Algebraic Algorithms. Vasiga, Troy Michael John. January 2008.
CPUs are unreliable: at any point in a computation, a bit may be altered with some (small) probability. This probability may seem negligible, but for large calculations (e.g., months of CPU time), the likelihood of an error being introduced becomes increasingly significant. Motivated by this fact, this thesis defines a statistical measure called robustness, and measures the robustness of several number-theoretic and algebraic algorithms.
Consider an algorithm A that implements function f, such that f has range O and algorithm A has range O' where O⊆O'. That is, the algorithm may produce results which are not in the possible range of the function. Specifically, given an algorithm A and a function f, this thesis classifies the output of A into one of three categories:
1. Correct and feasible -- the algorithm computes the correct result,
2. Incorrect and feasible -- the algorithm computes an incorrect result and this output is in O,
3. Incorrect and infeasible -- the algorithm computes an incorrect result and the output is in O'\O.
Using probabilistic measures, we apply this classification scheme to quantify the robustness of algorithms for computing primality (i.e., the Lucas-Lehmer and Pepin tests), group order and quadratic residues.
Moreover, we show that there will typically be an "error threshold" above which the algorithm is unreliable (that is, it will rarely give the correct result).
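As a toy version of this kind of robustness question, the following Python sketch runs the Lucas-Lehmer test with a single injected bit flip in the intermediate value and counts how often the verdict changes. The thesis's robustness measure is a formal probabilistic analysis rather than Monte Carlo fault injection, so the fault model and trial count below are assumptions for illustration only.

import random

def lucas_lehmer(p, flip_at=None):
    """Lucas-Lehmer test for M_p = 2^p - 1 (p an odd prime); optionally flip one random bit
    of the intermediate value at iteration `flip_at` to mimic a transient CPU fault."""
    m = (1 << p) - 1
    s = 4
    for i in range(p - 2):
        if flip_at is not None and i == flip_at:
            s ^= 1 << random.randrange(p)           # inject a single-bit error
        s = (s * s - 2) % m
    return s == 0

# fault-free reference: M_13 = 8191 is prime, M_11 = 2047 = 23 * 89 is not
assert lucas_lehmer(13) and not lucas_lehmer(11)

# crude robustness estimate: how often does one random bit flip change the verdict?
trials = 200
wrong = sum(not lucas_lehmer(13, flip_at=random.randrange(11)) for _ in range(trials))
print(f"{wrong}/{trials} faulty runs returned the wrong verdict for M_13")

In this toy setup both verdicts are feasible outputs, so it mainly illustrates the correct-versus-incorrect split; the three-way classification above becomes more informative for algorithms whose outputs can also fall outside the function's range.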
|
700 |
Fish (Oreochromis niloticus) as a Model of Refractive Error Development. Shen, Wei. January 2008.
Myopia is a common ocular condition worldwide and its mechanism is still not clear. A number of animal models of myopia and refractive error development have been proposed. The fact that form deprivation myopia could be induced in tilapia fish, as shown previously in my research, suggests that tilapia could be a new animal model for myopia research. In the first part of this thesis the tilapia model was perfected; then, based on this model, the effect of systemic hormones (thyroid hormones), which are associated with eye and body development, was investigated during refractive error development. Lastly, the physiological and morphological changes in the retina were further studied with optical coherence tomography (OCT).
In these experiments, significant amounts of myopia and hyperopia were induced within two weeks using goggles with lens inserts, as in other higher-vertebrate animal models, e.g. chicks. The results from the form deprivation treatment also show that the sensitivity of tilapia eyes may be an age-related effect during the emmetropization process. The larger the fish, the less hyperopic the fish eye, though the small-eye artefact may be a factor. The susceptibility of the refractive development of the eye to the visual environment may also be linked to plasma hormone levels. It was found that induced refractive errors could be shifted in the hyperopic direction with high levels of thyroid hormones. Also, after 2 weeks of treatment with negative or positive lens/goggles, the tilapia retina becomes thinner or thicker, respectively. When the goggles are removed, the thickness of the retina changes within hours and gradually returns to normal. However, the circadian retinomotor movement is a complicating factor since it affects the retinal thickness measurement with OCT at some time points.
In conclusion, tilapia represent a good lower vertebrate model for myopia research, suggesting a universal mechanism of myopia development, which may involve systemic hormones and immediate, short term retinal responses.
|