1

Techniques for Efficient Implementation of FIR and Particle Filtering

Alam, Syed Asad January 2016 (has links)
FIR filters occupy a central place in many signal processing applications which alter the shape, frequency content, or sampling frequency of a signal. FIR filters are used because of their stability and the possibility of linear phase, but they require a high filter order to achieve the same magnitude specification as IIR filters. Depending on the required transition bandwidth, the filter order can range from tens to hundreds or even thousands. Since the implementation of these filters in the digital domain requires multipliers and adders, high filter orders translate into a large number of such arithmetic units. Research on reducing the complexity of FIR filters has been going on for decades, and the techniques used can be roughly divided into two categories: reducing the number of multipliers and simplifying the multiplier implementation.

One technique to reduce the number of multipliers is to use cascaded sub-filters of lower complexity to achieve the desired specification, known as frequency-response masking (FRM). One of the sub-filters is an upsampled model filter whose band edges are an integer multiple, termed the period L, of the target filter's band edges. Other sub-filters may include complement and masking filters, which filter different parts of the spectrum to achieve the desired response. From an implementation point of view, time-multiplexing is beneficial because the maximum clock frequency supported by current state-of-the-art semiconductor technology generally does not correspond to the application-bound sample rate. A combination of these two techniques plays a significant role in the efficient implementation of FIR filters. Part of the work presented in this dissertation is a set of architectures for time-multiplexed FRM filters that benefit from the inherent sparsity of the periodic model filters. These time-multiplexed FRM filters not only reduce the number of multipliers but also lower the memory usage. Although the FRM technique requires a larger number of delay elements, it results in fewer memories and more energy-efficient memory schemes when time-multiplexed. Different memory arrangements and memory access schemes are also discussed and compared in terms of their efficiency for both single- and dual-port memories. An efficient pipelining scheme is proposed which reduces the number of pipelining registers while achieving similar clock frequencies. The single optimal point where the number of multiplications is minimum for non-time-multiplexed FRM filters is shown to become a function of both the period L and the time-multiplexing factor M. This means that the minimum number of multipliers does not always correspond to the minimum number of multiplications, which also increases the flexibility of implementation. These filters are shown to achieve power reductions between 23% and 68% for the considered examples.

To simplify the multiplier, alternative number systems such as the logarithmic number system (LNS) have been used to implement FIR filters, which reduces multiplications to additions. FIR filters are realized by designing them directly with integer linear programming (ILP) in the LNS domain in the minimax sense under finite word length constraints. The branch-and-bound algorithm, a typical algorithm for solving ILP problems, is implemented based on LNS integers, and several branching strategies are proposed and evaluated. The filter coefficients thus obtained are compared with traditional finite word length coefficients obtained in the linear domain. It is shown that LNS FIR filters provide a better approximation error than standard FIR filters for a given coefficient word length.

FIR filters also offer an opportunity for complexity reduction by implementing the multipliers using Booth or standard high-radix multiplication. Both of these multiplication schemes generate pre-computed multiples of the multiplicand, which are then selected based on the encoded bits of the multiplier. In transposed direct form (TDF) FIR filters, one input sample is multiplied by a number of coefficients, and complexity can be reduced by sharing the pre-computation of the multiples of the input data across all multiplications. Part of this work is a systematic and unified approach to the design of such computation sharing multipliers and a comparison of the two forms of multiplication. It also gives closed-form expressions for the cost of the different parts of the multiplication and an overview of various ways to implement the select unit with respect to the design of the multiplexers.

Particle filters are used to solve problems that require estimation of the state of a system. Improved resampling schemes for reducing the latency of the resampling stage are proposed, using a pre-fetch technique that reduces the latency by between 50% and 95%, depending on the number of pre-fetches. Generalized division-free architectures and compact memory structures are also proposed that map to different resampling algorithms, help reduce the complexity of the multinomial resampling algorithm, and reduce the number of memories required by up to 50%.
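As a rough illustration of the FRM idea summarized above (not code from the dissertation), the sketch below builds a narrowband FRM filter in Python/NumPy by expanding a model filter by a period L and masking its images; the filter lengths, band edges and L = 5 are illustrative assumptions.

```python
# Minimal frequency-response masking (FRM) sketch -- illustrative only, not the
# architectures proposed in the dissertation. All design values are assumptions.
import numpy as np
from scipy import signal

L = 5                                    # period: the model filter is expanded by L
g = signal.firwin(31, 0.4)               # model filter G(z), wide transition band
f = signal.firwin(41, 0.4 / L + 0.05)    # masking filter F(z): keeps the compressed
                                         # passband, rejects the images of G(z^L)

# Expand G(z) -> G(z^L): insert L-1 zeros between coefficients (sparse periodic filter)
gL = np.zeros((len(g) - 1) * L + 1)
gL[::L] = g

h = np.convolve(gL, f)                   # narrowband FRM realization: H(z) = G(z^L) F(z)

x = np.random.randn(4096)                # test input
y = signal.lfilter(h, 1.0, x)            # sharp-transition lowpass at low multiplier cost
```

The expanded filter gL is nonzero only at every L-th tap; it is this sparsity that the time-multiplexed FRM architectures in the dissertation exploit.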
2

FQPSK-B Baseband Filter Alternatives

Jefferis, Robert October 2002 (has links)
International Telemetering Conference Proceedings / October 21, 2002 / Town & Country Hotel and Conference Center, San Diego, California / Designers of small airborne FQPSK-B (-B) transmitters face at least two significant challenges. First, many U.S. Department of Defense (DOD) test applications require that transmitters accommodate a continuum of data rates from 1 to at least 20 Mb/s in one design. Another challenge stems from the need to package a high-speed digital baseband signal generator in very close proximity to the radio frequency (RF) circuitry required for 1.4 to 2.4 GHz operation. The -B baseband filter options prescribed by Digcom/Feher [2] are a major contributor to the variable data rate design challenges. This paper summarizes a study of -B filter alternatives and introduces FQPSK-JR (JR), an alternative to -B that can simplify digital baseband transmitter designs. Very short impulse response digital filters are used to produce essentially the same spectral efficiency and nonlinear amplifier (NLA) compatibility as -B while preserving or improving detection efficiency (DE). In addition, a strategy for eliminating baseband shaping filters is briefly discussed. New signaling wavelets, and the modified wavelet-versus-symbol-sequence mapping rules associated with them, can be captured from a wide range of alternative filter designs.
3

Frekvensuppdelning med FPGA

Ivebrink, Pontus, Ytterström, Peter January 2008 (has links)
The purpose of the thesis project was to create a frequency spectrum display for audio. Columns of LEDs are used to represent this frequency spectrum. The system is implemented on an Altera DE2 development board. Different ways of creating the frequency bands have been tested, as have different methods for realizing them. The final implementation consists of a filter bank that uses downsampling to reuse filters and lower their order. The biggest problem was fitting everything onto the FPGA that was used. By switching to a somewhat more complicated but more efficient filter structure, this problem was solved and plenty of room was left over. Manuals and datasheets have not always been easy to interpret, and at times methods other than those described in the manuals have been used, with advice from support forums and supervisors. There are some improvements that could be made, and some things could have been done differently to save resources at the cost of a slightly worse result. When the project was finished, all of the stated requirements had been met.
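As a loose illustration of the downsampling filter bank described above (not the Altera DE2 implementation itself), the sketch below reuses one lowpass/highpass pair at successively halved sample rates to obtain octave band energies; the tap count and test signal are assumptions.

```python
# Octave-band analysis by repeated filter-and-decimate -- an illustrative sketch of
# reusing the same filter pair at lower rates (not the FPGA implementation).
import numpy as np
from scipy import signal

lp = signal.firwin(31, 0.5)                 # shared lowpass with half-band cutoff
hp = lp * (-1.0) ** np.arange(len(lp))      # highpass by spectral inversion of the lowpass

def octave_bands(x, levels=4):
    """Return per-band signal energies, highest octave first."""
    energies = []
    for _ in range(levels):
        band = signal.lfilter(hp, 1.0, x)   # upper half of the current spectrum
        energies.append(np.mean(band ** 2))
        x = signal.lfilter(lp, 1.0, x)[::2] # keep the lower half and decimate by 2
    energies.append(np.mean(x ** 2))        # residual low band
    return energies

fs = 48000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 6000 * t)
print(octave_bands(x))                      # energies of this kind could drive LED bar heights
```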
4

FPGA Implementation of an Interpolator for PWM applications

Bajramovic, Jasko January 2007 (has links)
In this thesis, a multirate realization of an interpolation operation is explored. As one of the requirements for proper functionality of the digital pulse-width modulator, a 16-bit digital input signal is to be upsampled 32 times. To obtain the required oversampling ratio, five separate interpolator stages were designed and implemented. Each interpolator stage performs upsampling by a factor of two followed by an image-rejection lowpass FIR filter. Since each individual interpolator stage upsamples the input signal by a factor of two, the interpolation filters were realized as half-band FIR filters. This kind of linear-phase FIR filter has the nice property that every other coefficient is zero, except for the middle one, which equals 0.5. By utilizing half-band FIR filters for the realization of the interpolation filters, the overall computational complexity was substantially reduced. In addition, several multirate techniques were utilized to derive more efficient interpolator structures. Hence, the impulse response of each interpolator filter was rewritten into its corresponding polyphase form, which further simplifies the interpolator realization. To eliminate the multiplication by 0.5 in one of the two polyphase subfilters, the filter gain was deliberately increased by a factor of two. Thus, one polyphase path contains only delay elements. In addition, for the realization of the filter multipliers, a multiple constant multiplication (MCM) algorithm was utilized. The idea behind the MCM algorithm is to perform the multiplication operations as a number of additions and appropriate shifts of the input signal. As a result, less hardware was needed for the actual implementation of the interpolation chain. For correct functionality of the interpolator chain, scaling coefficients were introduced into each interpolation stage in order to reduce the possibility of overflow. For the scaling process, a safe scaling method was used. The quantization noise generated by the interpolator chain was also estimated and appropriate system adjustments were performed.
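The half-band polyphase structure described above can be sketched as follows; this is an illustrative Python model, not the implemented design, and the 15-tap filter length is an assumption.

```python
# Polyphase interpolation by 2 with a half-band filter -- a sketch of the structure
# described in the abstract (one branch is a pure delay); tap count is assumed.
import numpy as np
from scipy import signal

h = 2.0 * signal.firwin(15, 0.5)   # half-band lowpass; gain of 2 so the delay branch
                                   # needs no multiplication by 0.5
e0 = h[0::2]                       # even polyphase branch: the only real subfilter
e1 = h[1::2]                       # odd branch: all zeros except a single unity tap,
                                   # i.e. just a delay, as noted in the abstract

def interpolate_by_2(x):
    y = np.empty(2 * len(x))
    y[0::2] = signal.lfilter(e0, 1.0, x)   # even output samples
    y[1::2] = signal.lfilter(e1, 1.0, x)   # odd output samples (delayed input)
    return y

x = np.sin(2 * np.pi * 0.05 * np.arange(64))   # test tone at the low rate
y = interpolate_by_2(x)                        # 128 samples at twice the rate
```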
5

Development of a Driver Model for Vehicle Testing / Framtagning av förarmodell för fordonstester

Jansson, Andreas, Olsson, Erik January 2013 (has links)
The safety requirements for vehicles are high today and will become more stringent in the future. Car companies test their products every day to ensure that the safety requirements are met. These tests are often done by professional drivers. If the car is tested in an everyday traffic situation, a driver of normal experience is desired. A drawback is that a human will eventually learn the manoeuvre he or she is told to perform. An artificial driver is therefore preferable in order to make the test repeatable. The purpose of this thesis is to develop and implement an artificial driver as a controller that follows a predefined trajectory. The driver model's performance in a double lane change manoeuvre should be as close to a real driver's as possible. Data was gathered by inviting people to drive in a simulator. The results from the simulator tests were used to implement three different drivers with different levels of experience. The gathered data was used to categorize the test drivers into different driver types for each specific velocity, using the vehicle position from the test results. This thesis studies the driver from a controller's perspective, and it resulted in two implemented controllers for reference tracking. The first approach was a model predictive controller (MPC) with reference tracking, and the other approach was to use an FIR filter to describe the drivers' characteristics. A vehicle model was implemented in order to run the double lane change manoeuvre in a simulation environment together with the implemented driver model. The results show that both approaches can be used for reference tracking. The MPC showed good results in recreating the test runs made by the categorized drivers. The FIR filter had problems mimicking the drivers' test runs and their characteristics. The advantage of the MPC is its robustness, while the advantages of the FIR filter are its comparative simplicity of implementation and the algorithm's low computational cost. In order to make the FIR filter more robust, some improvements have to be made. One improvement is to use gain scheduling to adjust the filter coefficients depending on the velocity.
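A minimal sketch of the FIR-filter driver idea is given below: the steering command is a fixed weighted sum (an FIR filter) of recent lateral tracking errors, driving a crude lateral vehicle model. The filter coefficients, the damped double-integrator dynamics and the reference profile are assumptions for illustration, not the models identified in the thesis.

```python
# Illustrative FIR-filter driver closed around a crude lateral vehicle model.
# Coefficients, dynamics and reference are assumed, not taken from the thesis.
import numpy as np

def simulate_fir_driver(reference, b, dt=0.01, gain=2.0, damping=1.5):
    """reference: desired lateral position per step; b: FIR taps applied to the error."""
    err_hist = np.zeros(len(b))              # most recent error first
    y = v = 0.0                              # lateral position and velocity
    path = []
    for r in reference:
        err_hist = np.roll(err_hist, 1)
        err_hist[0] = r - y                  # current tracking error
        steer = float(np.dot(b, err_hist))   # the FIR "driver" output
        v += (gain * steer - damping * v) * dt   # damped double-integrator dynamics
        y += v * dt
        path.append(y)
    return np.array(path)

# Double-lane-change-like reference: 0 m -> 3.5 m -> back to 0 m lateral offset
ref = np.concatenate([np.zeros(200), 3.5 * np.ones(300), np.zeros(300)])
b = np.array([0.8, 0.4, 0.2, 0.1])           # assumed "driver characteristics" taps
trajectory = simulate_fir_driver(ref, b)     # compare against recorded driver paths
```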
6

Utveckling av prototyp för uppmätning av blodflöde med Dopplersensorer

Johansson, Tomas January 2016 (has links)
The most common current methods for measuring the blood flow in a vessel with Doppler technique require a cable between the patient and the measuring instrument. Advances in microelectronics have made it possible to manufacture ultrasonic transmitters and receivers, control electronics and antennas small enough to be integrated in a probe attached to the blood vessel. To read out the blood flow, NFC is used to securely send the information wirelessly to a smartphone or a tablet. This means that the cable between the patient and the measuring instrument would no longer be needed and the patient's freedom to move would increase. This thesis project therefore continued work on a prototype, using a Raspberry Pi and other medical equipment, to approach the ultimate objective. With the help of filtering and amplification, the goal was to reduce noise and amplify the signal so that the correct data is sent to the recipient's smartphone or tablet.
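As a hedged illustration of the "filter and amplify" step mentioned above (the prototype's actual front end and parameter values are not reproduced here), the sketch below band-pass filters a sampled Doppler signal digitally and applies a gain; the sample rate, band edges and gain are assumptions.

```python
# Illustrative digital counterpart of the filter-and-amplify step: band-pass filter
# a sampled Doppler signal and apply gain. All numeric values are assumptions.
import numpy as np
from scipy import signal

fs = 10000                                                      # assumed sample rate, Hz
taps = signal.firwin(101, [200, 3000], pass_zero=False, fs=fs)  # band-pass FIR

def condition(doppler, gain=20.0):
    filtered = signal.lfilter(taps, 1.0, doppler)   # suppress drift and HF noise
    return gain * filtered                          # amplify before transmission

t = np.arange(fs) / fs
doppler = 0.01 * np.sin(2 * np.pi * 800 * t) + 0.005 * np.random.randn(len(t))
clean = condition(doppler)
```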
7

Chebyshev Approximation of Discrete Polynomials and Splines

Park, Jae H. 31 December 1999 (has links)
The recent development of the impulse/summation approach for efficient B-spline computation in the discrete domain should increase the use of B-splines in many applications. Because we show here how the impulse/summation approach can also be used for constructing polynomials, the approach, combined with a search-table method for the inverse square root operation, allows an efficient shading algorithm for rendering an image in a computer graphics system. The approach reduces the number of multiplies and makes it possible for the entire rendering process to be implemented on an integer processor. In many applications, Chebyshev approximation with polynomials and splines is useful for representing a stream of data or a function. Because the impulse/summation approach is developed for discrete systems, some aspects of traditional continuous approximation are not applicable. For example, the lack of a continuity concept in the discrete domain affects the definition of the local extrema of a function, so the method of finding the extrema must be changed: both forward differences and backward differences must be checked to find extrema, instead of using the first derivative as in continuous-domain approximation. Polynomial Chebyshev approximation in the discrete domain, just as in the continuous domain, forms a Chebyshev system; therefore, the Chebyshev approximation process always produces a unique best approximation. Because of the non-linearity of free-knot polynomial spline systems, there may be more than one best solution and the convexity of the solution space cannot be guaranteed, so a Remez exchange algorithm may not produce an optimal approximation. However, we show that discrete polynomial splines approximate a function using a smaller number of parameters (for a similar minimax error) than discrete polynomials do. Also, the discrete polynomial spline requires much less computation and hardware than the discrete polynomial for curve generation when the impulse/summation approach is used. This is demonstrated using two approximated FIR filter implementations. / Ph. D.
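The point about locating extrema with forward and backward differences can be made concrete with a small sketch; this illustrates the general idea, not the dissertation's algorithm, and plateaus are ignored for simplicity.

```python
# Locate strict local extrema of a discrete sequence by checking the signs of the
# backward and forward differences rather than zeroing a derivative.
import numpy as np

def discrete_extrema(e):
    """Return indices of strict local maxima and minima of the error sequence e."""
    e = np.asarray(e, dtype=float)
    back = np.diff(e)[:-1]      # e[n] - e[n-1] for interior points n = 1..N-2
    fwd = np.diff(e)[1:]        # e[n+1] - e[n] for the same points
    n = np.arange(1, len(e) - 1)
    maxima = n[(back > 0) & (fwd < 0)]   # rises into the point, falls after it
    minima = n[(back < 0) & (fwd > 0)]
    return maxima, minima

err = np.cos(np.linspace(0, 4 * np.pi, 200)) * np.exp(-np.linspace(0, 1, 200))
print(discrete_extrema(err))    # candidate extrema for a Remez-style exchange step
```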
8

Comparison of Hilbert Transform and Derivative Methods for Converting ECG Data Into Cardioid Plots to Detect Heart Abnormalities

Goldie, Robert George 01 June 2021 (has links) (PDF)
Electrocardiogram (ECG) time-domain signals contain important information about the heart. Several techniques have been proposed for creating a two-dimensional visualization of an ECG, called a Cardioid, that can be used to detect heart abnormalities with computer algorithms. The derivative method is the prevailing technique, which is popular for its low complexity, but it can introduce distortion into the Cardioid plot without additional signal processing. The Hilbert transform is an alternative method which has unity gain and phase shifts the ECG signal by 90 degrees to create the Cardioid plot. However, the Hilbert transform is seldom used and has historically been implemented with a computationally expensive process. In this thesis we show a low-complexity method for implementing the Hilbert transform as a finite impulse response (FIR) filter. We compare the fundamental differences between Cardioid plots generated with the derivative and Hilbert transform methods and demonstrate the feature-preserving nature of the Hilbert transform method. Finally, we analyze the RMS values of the transformed signals to show how the Hilbert transform method can create near 1:1 aspect ratio Cardioid plots with very little distortion for any patient data.
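As an illustration of the low-complexity FIR Hilbert transformer idea (the filter length, window and synthetic ECG below are assumptions, not the design evaluated in the thesis), a windowed ideal Hilbert transformer can be applied to an ECG-like signal and plotted against the delay-compensated input to trace a Cardioid-style curve.

```python
# Windowed FIR approximation of the Hilbert transformer and a Cardioid-style plot:
# x-axis is the ECG, y-axis its 90-degree-shifted version. Values are assumptions.
import numpy as np

def hilbert_fir(numtaps=63):
    """Type-III FIR approximation of the ideal Hilbert transformer."""
    m = (numtaps - 1) // 2
    n = np.arange(numtaps) - m
    h = np.zeros(numtaps)
    odd = n % 2 != 0
    h[odd] = 2.0 / (np.pi * n[odd])        # ideal response: 2/(pi*n) for odd n, else 0
    return h * np.hamming(numtaps)         # window to control ripple

h = hilbert_fir()
delay = (len(h) - 1) // 2                  # group delay to compensate for

fs = 360                                                   # assumed ECG sample rate, Hz
t = np.arange(2 * fs) / fs
ecg = np.exp(-0.5 * ((t % 0.8) - 0.4) ** 2 / 0.02 ** 2)    # crude stand-in for QRS spikes

q = np.convolve(ecg, h, mode="full")[delay:delay + len(ecg)]  # 90-degree-shifted signal
# Plotting (ecg, q) with matplotlib would trace the closed two-dimensional curve.
```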
